From YouTube: IETF114-SCIM-20220729-1630
Description
SCIM meeting session at IETF 114
2022/07/29 16:30
https://datatracker.ietf.org/meeting/114/proceedings/
A: All right, can you see the slides that are being shared?
A: Excellent. Okay, let's go ahead and get started. Welcome to the last session on the last day of IETF 114, yay. You are in SCIM, simple cloud identity management; I'm not going to read out the official name, because that's what we do. This is what we are covering in the session; if you're not expecting to talk about SCIM, you're probably in the wrong session. Next slide. By now you should all be very well familiar with how the IETF works and the participation rules, which we note in the Note Well.
A
I
do
want
to
provide
some
meeting
tips
and
I
know
it's
the
last
day,
but
we've
been
asked
as
chairs
to
remind
everybody
of
the
meeting
tips
so
for
those
who
are
here
physically
in
the
venue,
make
sure
to
sign
into
the
session
via
either
the
little
video
icon
or
the
on-site
tool
icon
on
the
webpage
under
skim.
A
That
way,
there's
some
binding
that
acknowledges
that
you
were
present
in
the
session.
Please
use
the
meat
echo
to
join
the
mcq,
especially
those
that
are
here
present
that
way
we
can
respect
the
order
for
those
that
are
participating
remotely
as
well.
For
those
here
who
are
present
turn
off
your
audio
and
video.
That
way,
we
don't
get
feedback
and
the
other
important
one
is.
A: Okay, I think we're all friendly in this room, so this is just a reminder of the code of conduct: treat each other with respect, keep it professional, keep the discussion on point. Next slide.
A: Okay, so to the orders at hand. Thank you, Judas and Pam, for being the Jabber scribes, really the note takers; I guess I can update that now since we're no longer using Jabber. The minutes: the link is there for those who want to track the note takers, and feel free to augment if you see things that are missing; anybody can go in and edit the minutes as well. The meeting material: there's the link there, and hopefully, if you're already here, you know the Meetecho link. Next slide for the agenda.
A: Today we do have a packed set of items that we want to cover. Rather than going through the actual drafts, I've asked Pam and Phil, who has the first adopted draft, that being SCIM Events, to just give a brief update of where we are with those particular documents for the protocol and schema. We've had a lot of discussion on the particular items that we want to focus on.
A: So rather than just saying we don't quite have a draft yet, Danny, and Janelle if she's on, will talk about the different particular features, capabilities, and use cases that need to be addressed, given that there was sufficient discussion on the email thread: how we go beyond the actual bootstrapping or provisioning, how we do the updates, and the scalability and performance for those updates.
A: That's my terminology. In the thread there have been discussions about filtering and how we do coordination; we've allotted more time to talk about that, and the other term that was used was pagination, so we'll spend a significant amount of time on that. Phil, I forgot to update the agenda, that was my bad; I was going to update the slides. So what is in the actual agenda?
A: What's under the Meetecho is correct. So Phil, we're giving you five minutes to just give an update on the draft itself, but we're giving you 15 minutes; Phil has uploaded it in the meeting materials.
A: So, any comments or updates? And I apologize, I meant to update the agenda; I did that anyway, I don't know why it didn't go on the agenda. If not...
F: Hi everybody, can you hear me? Okay, looks good. Yes, so the number one news is: we have a use cases document, but we have it in HackMD format right now. The link is going to go in the notes, and we're not going to discuss it today, because obviously no one's had a chance to review it, but we will start the process right away of reviewing that document on the list. So it's work that we literally got done here at IETF.
F
Let
me
just
tell
you
quickly
about
the
rationale
that
we're
using.
This
was
part
of
the
review.
We
did
at
the
last
the
last
plenary
as
well.
What
you're
going
to
see
in
there?
The
is
terminology:
we've
tried
to
define
terminology
that
was
not
defined
in
the
protocols
themselves,
so
in
rfc,
76,
43
or
44..
F: So you're going to see a definition of a SCIM service provider, and of a SCIM client, and of what the SCIM schema is, and of what the SCIM protocol is. Where possible we've taken those from the protocols, copying them across, and then we're trying to boil them down to make them very brief. But they may be wrong, and there may be cases where actually there is a definition and it just didn't come up as the authors of this new draft were casting about.
F
There
are
also
definitions,
so
there's
basically
two
types
of
definitions.
One
is
you
know
the
things
used
in
the
protocols.
The
second
one
is
industry
definitions
that
would
be
relative
to
the
use
cases
where
people
might
implement
skim.
So
those
are
absolutely
up
for
debate.
I
put
in
slightly
provocative
definitions
so
that
people
would
know
to
to
react
to
them,
so
don't
be
afraid
to
go
in
and
change
them.
F
We
that's
the
whole
ideas
that
we
iterate
and,
and
then
the
second
piece
is
about
so
the
concepts
are
generally
around
units
of
work
that
implementers
might
want
to
perform,
and
then
the
last
piece
is
business
scenarios
and
the
goal
there
is
to
define
end-to-end
activities
that
an
implementer
might
look
at
and
say
yes,
this
is
something
I
want
to
do
so.
The
link
is
in
the
in
the
meat
echo.
I
will
leave
it
at
that.
F: So we stay on time. If anyone wants to take the pen or do a large revision, we're going to have the HackMD open for a couple of hours so that anyone who's here and has time can do it themselves, or even come talk to us. Danny and I are the original originators, and we did do a review in the SCIM side meeting yesterday, so others who happened to be in the room have had some review of it. How's that? Any questions before I get kicked off the stage? Anyone online who would like to ask a question, feel free.
F: Okay, perfect, looking forward to feedback on the list then.
G: Just one quick addition to what Pam said: yeah, we'll leave the HackMD open for a bit, and then either today or Monday, or somewhere in between, we'll publish the first version to the Datatracker. Okay.
A
Yep
just
a
reminder
when
you,
when
you
put
it
in
the
data
tracker
just
post
to
the
mail
list
and
solicit
feedback,
we.
G: Hello, everybody, can...
G: Good, cool, okay. So, is Janelle in the chat? I didn't see her, I'm assuming. Cool, okay. So my name's Danny Zollner; along with Janelle, who unfortunately couldn't be here today, we have sort of thrown our hats in the ring to be the editors for a big body of work that comprises changes and additions to the SCIM schemas and protocols, both eventually the main documents themselves, as well as helping to shepherd a bunch of extensions that will add new functionality. Next slide, please.
G: So, our rough agenda for this chunk of time that we have: we have some topics that we'd like to talk about that do currently have drafts, although several of them have expired; one or two have been revived in the past day or two. And then there's a whole bunch of topics that we'll move on to that don't currently have drafts, and just at a conceptual level I'd like to talk about them, see if anybody has opinions or interest, and if anybody wants to stand up and volunteer to either author, edit, or just contribute ideas, whichever. So we have this agenda here, and we can just move on to the next slide and dive straight into cursor-based pagination. First, our section of topics is drafts. Next slide, please.
G
So
we
do
have
a
draft
that's
out
there.
It's
expired
currently
by
matt
peterson
on
cursor-based
pagination.
Their
application
is
listed
here
at
a
high
level.
It
introduces
a
few
new
parameters
for
queries
and
response
attributes,
so
things
like
cursor
count
next
cursor
and
previous
cursor
and
the
at
the
high
level.
The
use
case
is
to
improve
sort
of
you
know
massive
scale
operations
when
a
client,
interacting
with
a
skim
service
writer,
needs
to
be
able
to
traverse
a
very
large
set
of
results.
G: You know, just GET /Users?count=10, and the key thing is highlighted there: the additions would be that nextCursor, and the cursor, when provided in the query URL for a subsequent query, would return another set of results.
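For illustration, a minimal sketch of the cursor flow described above; the parameter names (cursor, count, nextCursor) come from the slide, while the response layout and cursor values are assumptions rather than the expired draft's exact text:

    GET /Users?count=10

    HTTP/1.1 200 OK
    {
      "totalResults": 5000,
      "itemsPerPage": 10,
      "nextCursor": "VZUTiyhEQJ94IR",
      "Resources": [ "...first 10 user resources..." ]
    }

    GET /Users?count=10&cursor=VZUTiyhEQJ94IR     (the client replays nextCursor to fetch the next page)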
G: So I guess I'll pause at this point if anybody has any comments or questions. I believe we have time carved out later specifically to talk about pagination, so we can probably just get past this slide; but I know there's been some discussion on the mailing list about this approach versus a SCIM Events based approach, and we can cover that in the allotted time later on in the agenda. So, without any questions in the queue, I think we just move to the next slide, please. And so, next, one of the topics that's in the charter is multi-valued attribute pagination and filtering. Phil Hunt, who's on the call, wrote this draft; Phil, if you would prefer to speak to this at any point, feel free. Okay, Phil's in the queue.
D
Yeah,
it's
still
there.
I
think
it's
based
on
extending
the
complex
multi-value
attribute,
sub-attribute
filtering
and
applying
the
same
technique
to
the
attributes
parameter
and,
in
addition
to
the
attribute,
sub-attribute
filters,
then
you
can
also
say
what
what
pages
of
that
you
want.
D
So
those
two
things
kind
of
go
together,
and
it's
really
in
a
case
of
the
group
saying
hey
we
like
it
or,
if
there's
enough
interest
to
do
it,
the
only
complex
thing
that
really
came
up
in
the
past
when
we
first
proposed
it
was
how
to
deal
with
knowing
how
many
rows
there
are
and
that's
up
for
some
discussion
and
that's
probably
about
the
only
thing
we
have
to
sort
through
on
that
spec.
G: All right, yeah, thank you, Phil. So the most common example of where multi-valued attribute pagination would come in would be if you have a group with a million members, and you don't want to receive one response with a list of all million members, but instead break it up into smaller chunks; that's exemplified in the request on the right-hand side of this slide. I know from the mailing list there's also been some feedback from Matt Peterson, who again cannot be here today, on a different approach, which would be to represent group memberships as two different sets of resources, like /GroupMemberships and /UserGroups, to represent the memberships in a group and the groups that a user is a member of, respectively.
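As a rough illustration of the million-member-group case, a request asking for only a slice of the members attribute might look something like the following; combining the attributes parameter with paging parameters follows Phil's description above, but the exact parameter placement shown here is an assumption, not the draft's syntax:

    GET /Groups/e9e30dba-f08f-4109-8486-d5c6a331660a?attributes=members&startIndex=10001&count=500

    HTTP/1.1 200 OK
    {
      "id": "e9e30dba-f08f-4109-8486-d5c6a331660a",
      "displayName": "Everyone",
      "members": [ "...members 10001 through 10500 only..." ]
    }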
D: There already is an attribute under the user profile called groups, from which you can find out what groups a user is a member of, so we don't need a new object for that; that's already in the spec.
G: I think he was talking about something slightly different. It was an approach to not need a multi-valued attribute.
G: Those are the multi-valued attributes on an object, and so when you have a large series of results you run into the situation where you would need the multi-valued attribute pagination drafts, whereas the proposal from Matt Peterson wants to do new top-level SCIM resources that represent the memberships, either as the members of a group or as the groups that a user belongs to, as resources or results that are returned, so that the existing paging logic applies instead.
G: So anyway, at a high level that's this topic. Next slide, please.
G: ...gotten this one wrong. I know you were an author on this, even if Morteza has the name on the draft. ...is softDeleted equals true, I guess, rather than, you know, whatever attribute it is; the other form of filtering we're explicitly calling a filter, rather.
A
So
danny,
let's
pause
for
a
couple
minutes
because
phil,
unless
you
were
able
to
capture
it,
you
can
go
ahead
and
respond.
But
I'm
gonna
ask
danny
to
pause
for
a
couple
minutes
because.
D
My
understanding
is
that
the
issue
right
now
is
that
the
skim
contract
says
skim
protocol,
which
is
essentially
alcohol
contract,
says
that
once
you
delete
a
resource,
if
you
try
to
do
a
get
resource,
you're
supposed
to
return
a
404,
it's
supposed
to
act
like
it's
gone
and
I
think
the
idea
of
a
of
a
parameter
that
that
sort
of
says
yes,
I
know
I'm
querying
a
deleted
resource,
but
I
want
you
to
return
it
to
me
anyway.
That's
basically
essentially,
what's
going
on,
I
haven't
delved
much
further
into
it
than
that.
D
It
was
really
something.
Morteza
was
looking
for.
There
may
be
other
ways
to
do
this,
but
if
you,
the
key
is
that,
if
you're
doing
a
get
against
an
object
that
was
deleted,
the
server
would
normally
say
404
not
found.
That's
what
it's
supposed
to
do.
That's
what
we
decided
so
now
we're
we're
trying
to
say
yes,
but
also
check
to
see
if
it
was
a
soft
delete
and
return
it.
If
it
is
so,
we
have
to
have
something
to
override
the
behavior.
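A minimal sketch of the override being discussed; softDeleted is the parameter named on the slide, and the behavior shown is only illustrative of the idea, not an agreed design:

    GET /Users/2819c223-7f76-453a-919d-413861904646
    HTTP/1.1 404 Not Found                    (normal behavior after a DELETE, per RFC 7644)

    GET /Users/2819c223-7f76-453a-919d-413861904646?softDeleted=true
    HTTP/1.1 200 OK                           (soft-deleted resource returned despite the delete)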
A
We're
still
listening
phil,
it's
just
I'm
pausing,
because
we
have
two
av
people.
D: That's something that the spec would have to define, because the first delete that you did, which is an HTTP DELETE, turned it into a soft delete because your server supports that; then you'd have to ask the question, well, if I delete it again, does that wipe it out? So that's something the document, the spec, would need to describe.
A: Okay, so Pam wants to speak, but as a minute taker she hasn't had the chance, so I'm going to give her a pass. Go ahead, Pam.
G: For normal hard deletion, both would probably be represented inside of ServiceProviderConfig elements as part of that. Okay.
J: No, totally fine. Michael Brock here. Yeah, I just wanted to state the clear need for a soft delete, because there are a lot of audit log type...
G: Yeah, and as a chair, would you say that that needs to be figured out in the individual draft that gets submitted, or potentially...
D: So the client asks for delete, and the protocol says that as far as the client is concerned, the object is deleted; now, whether it actually is or not, that's only something the service provider knows. So really the issue is: why does the client need to request a soft delete or not? That's a policy of the service provider, not of the client.
D: So maybe we need to work on the use case, because the current signaling, as a contract, is: if the client asks for delete, as far as it knows it's a hard delete, whether it actually is deleted or not. The requirement I understood before was really the case where, during lifecycle management of the user, you want to open up the opportunity to resurrect the user, to maintain the resource identifiers and so on, so that links don't get broken.
D: It's just that the workflow on the server will say, okay, I've already got a match for that, based on hidden data that it knows about, and you resurrect that account, and now that account just exists, so the next time the client queries it, it's there. So I'm not sure I understand why a client needs to flag it.
G: Yeah, Aaron, I would agree. We're not defining the actual mechanism of soft deletion and what it means to the application, but rather, if the application has their own concept of soft deletion, that term at least is, I think, somewhat in agreement across a lot of identity systems; that might be too controversial a statement. But at that point, this draft is defining a way for the client to learn that from the...
F
So
one
thing
I
think
we
have
to
keep
in
mind
is
that
in
a
world
where
the
skim
server
is
always
the
authoritative
source
and
the
skim
client
is
always
not,
then
that's
fine,
but
in
our
case
like
we
now
have
cases
where
the
skim
client
can
be
the
authoritative
server
right.
So
so
the
question
becomes,
if
you
know,
for
example,
if
it's
your
idas
platform,
that
is
the
skim
client
right.
There
could
very
well
be
a
use
case
for
them
to
to
be
more
prescriptive
and
to
demand
a
hard
delete.
K
I
think
this
would
need
more
details
on
the
use
case,
because
even
for
that
sort
of
resurrecting
a
temporarily
disabled
user
case,
like
there's
already
the
status
field,
that
we
have
on
objects
that
can
be
used
for
that.
So
I
would
be
curious
to
say
what
specifically
would
need
that
soft
delete
info
as
opposed
to
that.
D
Yeah,
I
was
going
to
add
that
that
it
might
not
be
so
much
as
a
special
flag
or
we
could
look
at
it,
but
it
might
be
that
when
you
go
to
do
an
ad,
you
want
to
say
I
want
to
re-add
with
the
I
want
to
create
the
record,
but
I
want
to
resurrect
the
old
identifier.
That's
really
the
issue,
because
under
skim
protocol
right
now,
when
you
create
a
user,
the
skim
server
is
required
to
assign
the
identifier
where
the
client
can't
say
it.
What
the
client's
saying
is.
G
Does
anybody
else
have
any?
I
guess
comments,
questions
on
soft
deletion
or
should
we
move
on.
G
I
think
we
should
just
go
ahead
and
move
on.
Okay.
Also,
I
guess
test
people
remote.
Is
this
microphone
working?
Okay,.
G: Go ahead. Cool. Oh boy, this is small text for being up here. So, this slide has a few suggested additions for soft deletion. I think, given the current state of it, we're not sure that this is the right fit just at a concept level, so we could probably skip this slide; if anybody's interested, they can go review the meeting materials later.
G: This is a draft that I wrote late last year on roles and entitlements. The URL is definitely wrong there, oops, but at a high level this draft aims to add two new resource types to the schema, which would be /Roles and /Entitlements.
G
The
purpose
of
that
is
to
provide
a
way
for
a
skin
client
to
go
and
query
a
list
of
all
of
the
available
values
for
either
roles
or
entitlements.
That
would
be
accepted
for
the
the
respective
values
on
the
the
user
resource.
G
So
a
problem
that
exists
today
is
especially
in
applications
that
are
linked
to
a
skim
server,
where
you
know
what
we'll
call
them
the
multi-tenanted
applications.
Typically
they're.
You
know
sas
in
nature.
G
The
customer
or
like
on
a
per
tenant
basis
can
customize
what
roles
are
available
in
their
application,
and
the
skim
client
today
has
no
like
protocol
or
schema,
or
you
know,
skim
standard
way
to
go
and
discover
what
are
the
available
values
that
will
work
in
these
requests,
and
it
therefore
either
creates
sort
of
an
out
of
management
problem
where,
if
a
new
role
is
created
in
the
app,
it
also
has
to
be
created
somewhere
else
or
it
leads
to
a
whole
bunch
of
failed
requests.
G
When
the
skim
client
doesn't
have
the
correct
data
and
is
sending
requests
to
the
skim
server
that
are,
you
know
deemed
invalid
due
to
you
know,
disallowed
values
on
the
right
hand,
side.
There's
an
example.
Just
you
know
it's
it's
a
resource.
Very
basic
has
things
like
you
know,
value
display,
just
as
the
the
the
user
resources
roles
or
entitlement
attribute
would
on
that
side.
I
two
nights
ago
published
a
new
version
which
is
currently
at
increment,
zero.
G
Two
that
actually
has
a
few
new
features
as
well.
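A sketch of what a /Roles listing could look like under this draft, using only the value and display sub-attributes mentioned above; any structure beyond those two fields is an assumption rather than the draft's actual schema:

    GET /Roles

    HTTP/1.1 200 OK
    {
      "totalResults": 2,
      "Resources": [
        { "value": "admin",  "display": "Administrator" },
        { "value": "viewer", "display": "Read-only viewer" }
      ]
    }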
G
I
didn't
include
those
on
this
slide,
but
it
those
features
revolve
around
sort
of
role
or
entitlement
hierarchy
in
the
event
that
there
might
be
a
role
that
is
actually
made
up
of
several
smaller
or
like
less
permissioned
roles,
so
that
you
can
understand
sort
of
how
the
different
like
how
the
structure
of
of
permissions
or
roles
or
licenses
or
whatever
the
thing
that
is
being
represented,
works
in
that
application,
as
well
as
a
few
attributes
that
help
to
represent
the
availability
of
those
on
a
numerical
basis.
G
It's
more
towards
probably
the
use
case
of
entitlements.
Thinking
of
instances
where
entitlements
may
be
representing
a
paid
license
in
a
service,
and
you
may
only
have
100
or
1000
or
whatever
number
of
seats.
So
the
ability
to
know
what
is
the
total
number
of
users
that
can
have
this
value
and
how
many
currently
have
it
are
also
added
in
the
zero
two
version.
G
And
that's
all
I
have
to
say
on
this
one.
If
anybody
has
any
questions.
G
And
to
so
to
close
out
that
section,
I
I
think
the
hope
is
that
for
some
of
these
topics,
maybe
we
slow
down
a
little
on,
say
soft
deletion
and
figured
out
as
a
group,
given
that
there
are
current
drafts.
G
I
would
like
discussion
to
pick
back
up
on
the
the
mailing
list
and
for
us
to
get
to
a
point
where
you
can
do
calls
for
adoption
on
some
of
these
existing
drafts
or
figure
out
why
you
know
what
they
can't
be
adopted
and
go
fix
those
problems
so
that
we
can
start
as
the
working
group
having
more
adopted
drafts
to
work
through
so
just
keep
an
eye
out
on
the
mailing
list.
G
There
will
be
some
emails
pertaining
to
most
of
these
drafts
that
we
just
covered
in
the
next
few
weeks.
Next
slide,
please
so
going
through
some
other
topics,
so
these
I
think,
for
the
most
part
align
with
the
charter
that
we
have
for
the
working
group
today,
however,
there
are
no
drafts
that
have
been
written,
so
this
first
one
is
around
change,
detection
or
delta
query.
The
use
case
would
be
similar
to
the
you
know:
crystal-based
pagination.
G
It's
a
tool
to
help
with
large
scale,
sort
of
manipulation
and
tracking
of
data.
It's
particularly
needed
in
pool
based
scenarios
where
the
the
data
is
sort
of
maintained
and
changes
on
the
scam
service
provider
and
is
then
being
retrieved
for
some
other
purpose
by
the
skim.
Client
such
as
you
know,
a
human
resources
provider
where
their
data
is
being
retrieved
for
use
elsewhere.
G
Currently,
there's
and
I
think,
an
option
to
do
a
get
based
on
the
meta
dot
last
modified
attribute
to
detect
changes,
however,
that
doesn't
fit
all
use
cases
as
systems
that
are
sort
of
like
distributed
systems
such
as
a
lot
of
you
know.
Cloud
like,
as
a
service
systems
may
have
time
drift
that
causes
problems
with
getting
extremely
accurate
sets
of
results
based
on
time.
G
So
with
the
the
delta
query,
it
would
help
to
to
provide
a
way
to
accurately
get
all
changes
since
the
last
time
that
a
request
was
generated.
The
example
at
the
bottom
is
just
one
possible
format
I
could
take.
You
know,
get
users
with
the
parameter
of
delta
token
equals
and
then
a
randomly
generated
grid
that
I
put
in
there
there.
I
know
in
the
mailing
list
there
have
been
some
other,
you
know
conversations
and
there
are
other
thoughts
on
how
to
approach
this.
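To make the example at the bottom of the slide concrete, one possible shape of the delta query exchange; deltaToken is the parameter named on the slide, while the token values and the response layout here are purely illustrative:

    GET /Users?deltaToken=7b1f4e2a-93c6-4c1e-a1d0-0f4a2b6c9d11

    HTTP/1.1 200 OK
    {
      "Resources": [ "...only resources changed since that token was issued..." ],
      "deltaToken": "0a9d3c51-4f2e-4b8a-bb61-5e7d2c8f3a90"
    }

    (the client replays the new deltaToken on its next poll)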
J: Yeah, thanks, Aaron. I just wanted to speak to the usefulness of this, especially when you look at some of the bulk-update nature of the items we're dealing with, or mapping over to real-world scenarios, digital identities. It can be quite useful to be able to say, hey, I need to see every company, for instance, that an entity or an identity was registered for since the last time there was an update or some event that occurred. So it's a highly, highly useful capability.
G
Okay,
so
switching
microphones
cool
thanks,
if
you
guys
so
yeah
the
the
next
topic
would
be
the
human
resources
schema.
This
is
also
part
of
our
charter.
G
Given
the
close
relationship
that
data
originating
from
human
resources,
our
human
capital
management
providers
tends
to
have
with
other
identity
systems,
and
we
are
you
know,
system
for
cross
identity
management.
After
all,
there's
a
desire
to
get
a
unified
generic
human
resources
schema
for
skim
so
that
human
resources
providers
can
start.
You
know
labeling
the
same
sets
of
data
that
they
may
have
in
the
same
way,
rather
than
everybody.
You
know
labeling
different
attributes,
different
things
when
they
serve
the
same
purpose,
so
this
one
does
not
have
a
draft.
G
I've
mentioned
this
recently
on
the
mailing
list.
I
believe
one
of
the
critical
things
that
we
will
need
here
is
to
get
involvement
from
a
significant
number
of
human
resources
providers
to
provide
their
feedback
on.
You
know
sort
of
the
shape
of
this
schema
as
if
a
if
it's
just
sort
of
identity,
knowledgeable
people
who
don't
necessarily
exist
in
the
human
resources
or
human
capital
management
world.
G
We
might
get
it
wrong
and
it
might
not
actually
help
with
the
problem
we're
trying
to
solve
any
questions
on
this.
One.
G
Okay,
in
that
case,
I
will
move
to
the
next
one.
Please.
G
So
the
next
topic
would
be
account
status
context,
and
so
this
lines
up
with
the
discussion
on
soft
deletion
pretty
well.
Currently,
the
the
only
real
information
that
you
have
about
a
user's
status
in
a
lot
of
ways
is
the
active
attribute
which
is
a
boolean.
So
it's
active,
true
or
false.
There's
a
proposal
to
expand
this
out,
perhaps
with
a
new,
let's
say,
complex,
attribute
to
support
active
that
would
be
called
something
like
account
status
where
you
can
see
things
about
that
user.
G
Are
they
a
pre-hire?
Are
they
on
leave,
unpaid
or
paid?
Have
they
been
terminated?
There's,
I
think
a
desire
to
align
some
of
these
states
with
states
from
the
shared
signals
community
as
well,
but
yeah.
So
this
is
a
topic
that
we
would
like
to
see
a
draft
for
as
well,
and
I
believe
it
is
also
a
part
of
our
charter.
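A sketch of what an accountStatus complex attribute alongside active might look like; the sub-attribute names and the state values (pre-hire, on leave, terminated, and so on) are illustrative guesses at the states mentioned above, not a proposed schema:

    {
      "userName": "jdoe",
      "active": false,
      "accountStatus": {
        "state": "onLeave",
        "paid": false,
        "effectiveDate": "2022-08-01T00:00:00Z"
      }
    }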
G
I'll
pause
for
five
seconds,
for
you
know,
hands
to
go
up
otherwise
next
slide,
please,
okay,
so
the
I
think
we're
down
to
the
last
two
slides
for
this
section.
So
there's
an
improvement
to
the
protocol
that
I
would
like
to
see
around
reference
urls,
and
there
have
been
discussions
in
some
of
our
interim
meetings
before
possibly
also
on
the
mailing
list.
G
So
the
key
example
that
that
I
have
would
be
the
photos
attribute
in
the
core
user
schema,
so
the
photos
attribute
is
a
complex
attribute,
but
underneath
that
sort
of
the
main
sub
value
in
that
complex
attribute
is
an
attribute
of
a
data
type
that
is
called
reference
which
there
are
actually
very
few
of
in
the
in
this
comes
back
relative
to
things
like
you
know
strings.
G
So
a
reference
attribute
needs
to
point
to
another
resource
somewhere
and
there's
gonna
be
multiple
types
there
in
the
schema
spec,
you
can
go.
Look
it
up.
The
problem
with
things
that
are
of
url
formats,
specifically
like
the
url
to
somebody's
profile
picture,
is
that
in
a
cloud
like
sas
internet-based
world,
the
systems
are
communicating
over
the
internet.
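For reference, the photos attribute being described looks like this in a user resource; this form follows the example in RFC 7643:

    "photos": [
      {
        "value": "https://photos.example.com/profilephoto/72930000000Ccne/F",
        "type": "photo"
      },
      {
        "value": "https://photos.example.com/profilephoto/72930000000Ccne/T",
        "type": "thumbnail"
      }
    ]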
G
So
if,
as
a
cloud
idp,
I
am
on
one
side
of
a
transaction,
that's
happening
over
the
internet
and
I
am
communicating
all
the
user
profile
pictures
for
an
organization
vscam
and
I'm
giving
urls
the
service
provider.
That
is
being
told.
Those
urls
will
then
need
to
go
back
and
ask
for
the
pictures
and
the
spec
doesn't
actually
clearly
specify
today
the
existing
spec
on
what
happens
after
those
urls
are
provided.
G
Does
the
app
that's
consuming
them
just
sort
of
hot
linked
to
them
forever,
or
is
the
skim
service
provider
expected
to
do
like
a
fetch
and
then
store
it
somewhere
locally
on
their
side?
So,
if
you
know
the
cloud
app
represented
by
the
service
provider
wants
to
you
know
hot
link
forever.
That
becomes
a
problem.
G
I
don't
think
it
would
be
a
very
popular
solution
either
for
performance
or
just
you
know,
cost
reasons,
but
the
the
actual
big
problem
that
I'd
like
to
see
there
see
us
solve
here
is
for
these
urls.
If,
as
the
cloud
idp,
I
provide
a
url,
you
know
something
dot
jpeg
open
to
the
internet.
How
do
I
make
sure
that
only
the
skim
service
rider
that
I
sent
it
to
is
able
to
access
it?
The
skim
standard
today
doesn't
talk
at
all
about
securing
these.
These
urls,
the.
G
I
believe
the
intention
of
the
original
authors
was
to
leave
it
up
to
the
implementers.
Unfortunately,
that's
led
to
not
you
know,
just
there's
not
a
whole
lot
of
use
profile,
pictures
between
cloud
idps
and
service
providers
today.
D: My understanding is it's just a URL, so if you are going outside the spec and pre-fetching the data as a service provider, yeah, you're opening up a can of worms, because now you are republishing the picture. What was discussed originally was: just publish the URL, and it's the client that receives it that has to have the credentials to go and pull it. But the spec is silent on that for a reason, because there wasn't consensus on that, so it's just a URL.
G: Oh yeah, thanks, Phil. I agree that it's a can of worms; there are a lot of questions there about who should be doing what. I think a consensus is needed, though, and that's sort of what I'm calling for here, just because, looking at the number of collaboration apps that exist out there today, there's just, in my own anecdotal experience...
G
I've
had
a
number
of
conversations
where
folks
that
have
you
know
that
I've
helped
with
integrations
would
really
like
to
consume
profile
pictures,
but
there's
not
a
a
way
that
we
deem
secure
today.
This
is
me
speaking
as
a
product
manager
at
microsoft.
G
There's
a
security
problem
that
we
would
like
to
see
solved
here
this
week.
We
don't
you
know,
feel
that
it,
the
it's
quite
there
yet
and
the
we
need
a
scalable
way
to
to
do
this
versus
leaving.
You
know
certain
decisions
up
to
the
you
know
any
given
set
of
implementers
that
are
working
together.
That's
the
internal
problem,
essentially.
I
Hey
or
steel
from
transmute,
I
guess
I've
seen
versions
of
this
problem
solved
in
a
couple
different
ways.
So
I
just
share
you
know
briefly
the
some
of
some
of
the.
B: So, this is Josh Baum from...
G: So yeah, for this one, as we just heard from the feedback, I think there are a lot of potential solutions for how to address this. I don't have a great one to propose myself, but it's a topic that I would really like to see a group of people find a good, scalable solution to, so I'll try to continue this topic on the mailing list, but I would really love to get a draft written at some point for this. Next slide, please.
A
Okay,
just
so
it's
on
the
notes.
Phil
commented
on
the
chat
that
it
could
fit
in
the
best
practices.
G: Okay, so this is the last in the pile of assorted topics and drafts. I believe there's interest in tightening up some aspects of the SCIM standard with regards to security. SCIM was written in 2014-2015 and published in 2015; the internet and the world have changed since then. Just some examples of things that could be profiled and strongly discouraged in a modern security profile or BCP would be to drop support for basic auth when authorizing to the SCIM service provider, and to drop the password attribute off of the user resource. And to clarify on this one, I'm not necessarily proposing a hard "you may never do this, ever"; there are always going to be outliers and edge cases, for instance SCIM talking to some gateway that's talking to a really, really old mainframe or something, where the code on it is never going to change.
G: But for one of the really common SCIM use cases, cross-domain identity exchange, sort of the cloud flavor of it, passwords really shouldn't be going around; federation has taken its place as the way to go. And even just looking at the basic auth piece, there are bearer tokens, there's OAuth, there are other ways to approach it.
G: To clarify, basic auth, think like username and password; if you can't tell by my lack of precision on these terms, I don't really live in the auth world. And adding to Orie's comment a minute ago, the thing on reference URL security may also fit into this.
D: I think I've got it, okay. Yeah, we talked about this before, and it may already be in the security considerations, and I'll double check that, but there are really two scenarios. One is authenticating so you can make calls to SCIM as a client, which is using an HTTP Authorization header, and it's not a great practice.
H: If I'm understanding this correctly, these are two completely unrelated example suggestions that Danny is presenting: one is about the client authenticating itself to the service provider, and the other is about just data moving around. So I don't think these were linked together as part of the same issue, and again, I don't think it's necessarily the best time to debate these particular issues right now. This is more just...
D: Yeah, I just wanted to point out that the spec is very clear right now: although it does not say MUST NOT, it says should not, and it's not even capitalized, that it should be avoided. If people feel very strongly, I wouldn't be opposed to banning it, but I think it's a much more difficult question on the second issue, of getting rid of password entirely, and I don't think the community is ready for that.
D
There's
still
ldap
out
there
for
god's
sake,
so
they
haven't
got
rid
of
that,
and
so
I
think
that's
sort
of
a
key
part
and
and
I'm
hearing
a
lot
of
people
really
want
the
opposite.
They
want
credential
schema
to
have
more
sophistication
about
credentials
and
they
actually
want
not
just
the
password,
but
they
want
all
the
password
management
features
standardized.
So
password
policy.
When
was
the
last
change?
G
Yeah-
and
that
was
all
I
had
so
I
believe
this
is
the
last
slide
of
my
set
and
we
can
move
on.
D: That's probably 30 seconds. The working group had a call for adoption for the Events draft; that was passed. Prior to posting it I asked for co-authors, and thank you, Nancy, for agreeing to help me with the draft. We could probably still use a couple more co-authors.
D: What that usually means is that I'll be talking with the authors more directly at each publication cycle, and hopefully getting help in writing or in editing, and so on and so forth. So if anybody wants to co-author, please let me know; I just need a clear indication that you want to do that. I know there were a couple of other people, but I wasn't able, at time of publication, to get a confirmation that they wanted to be an actual author.
A: I will clarify: I will speak now as a participant, since I'm now an author for this draft; I will not be speaking to this draft as a chair. I will relinquish my role as chair to Aaron for this particular...
B: This is Phil, I'm interested in helping out, so hook me into whatever process you have in that respect.
D: And as for the draft, there were very minor changes, a couple of typos and other things. I think we have to start the discussion phase next, before you see any major updates.
D: Thanks. What I thought I saw, at least in the last month, was a lot of duplication of effort going on, or parallel streams that were starting to appear, and I was under the impression that we had made a decision, but it seems like we need to revisit that.
D
So
I
took
the
liberty
of
writing
what
I
saw
as
the
use
cases
for
for
events
and
for
paging
and
comparing
the
things
I
also
heard,
even
today,
things
like
change
detection
that
could
be
covered
for
some
people
by
the
skim
events
draft,
but
you
may
still
need
change
detection
as
a
polling
technique.
So
that's
something
we
we
should
sort
out,
so
I
thought
the
best
way
to
get
through.
This
would
be
to
talk
about
the
use
cases.
What
are
we
trying
to
solve
and
then
we
can
sort
of
say?
D
Okay,
what
approach
really
works?
I
also
want
to
understand
the
full
set
of
requirements
and
also,
I
think,
one
of
the
things
that's
been
pointed
out-
that
the
environment,
just
the
world
of
directory
services
dramatically
different.
So
we
now
have
broader
security
threats.
There's
signaling
going
on
that.
We
need
to
do.
D: There may be multiple clouds involved. For now I'm just going to talk about two at a time, and it doesn't really matter what they are, but the point is there are two separate domains. There can be lifecycle relationships between users in one domain and another, such as an employee: when the employee leaves the employer, the account, let's say at salesforce.com, needs to be either deleted or suspended or soft deleted. So that's a big question right there; I just gave you three possible outcomes at Salesforce.
D
That
might
happen
in
reaction
to
a
change
in
status
and
a
parent
domain,
and
that's
one
of
the
things
that's
important
to
observe
is
that
it's
sometimes
it's
very
complex
now
to
go
the
old-fashioned
way
of
ldap
but
saying
one
domain
controls
the
other,
because
what
we
have
is
a
concept
of
independent
control
and
slightly
independent
life
cycle
management.
So,
while
there's
a
relationship,
a
trigger
that
goes
between
the
two
independent
action
is
now
more
important
than
ever
so
we'll
go
through
this.
Let's
go
to
the
next
slide.
D
So
in
cursor
paging,
the
idea
is
that
the
domain
on
the
left
is
periodically
asking
for
a
logical
copy
of
the
entire
database
on
the
right
so
that
it
can
do
reconciliation.
Now
it
may
do
that
in
one
call,
or
it
may
actually
use
paging,
which
is
what's
being
asked
for
so
it
can
get
through
the
result
set.
D
What
is
good
about
this
is
that
the
reconciliation
process
is
deciding
what
changes
in
domain
b
mean
to
domain
a
and
then
deciding
what
to
do
in
domain.
A
on
the
sort
of
circle
arrow
on
the
left
and
right
side,
I'm
just
indicating
that
those
domains
are
are
running
independently.
They
have
their
own
value,
add
and
there
are
changes
that
occur
independent
in
each
other,
that's
very
different
from
the
old
world
where
we
had.
D: In the event-based system, it's really the same thing, the same environment and conditions, except this time, when a server in domain B processes a change, it issues a security event token, which is actually just a JWT, and sends it through a transport mechanism, which could be a message bus, or it could be the SET transfer protocol.
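A rough sketch of what such a security event token could carry once decoded from the JWT. The outer claims (iss, aud, iat, jti, events) follow RFC 8417; the event URI and the inner claim names here are illustrative assumptions loosely modeled on the SCIM Events idea of sending a notice about an identified resource rather than the data itself, not the draft's actual definitions:

    {
      "iss": "https://scim.example.com",
      "aud": ["https://receiver.example.net"],
      "iat": 1659112200,
      "jti": "3d0c3cf797584bd193bd0fb1bd4e7d30",
      "events": {
        "urn:example:scim:event:prov:patch": {
          "ref": "https://scim.example.com/Users/2819c223",
          "attributes": ["active", "displayName"]
        }
      }
    }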
D: Two things are majorly different. One is that the event can be transferred in real time, as soon as it occurs: any particular one of, say, 100 servers in domain B can publish the event directly, or it may route that event to a dispatch server which collects all the events as a stream and then sends it to its partner. That's really up to the deployer to determine, but the idea is that, as close to real time as possible, you get change notices coming to the other side.
D
Those
change
notices
can
be
notices
about
the
skim
resource
itself.
This
piece
of
schema
this
attribute
changed
and
so
forth,
or
they
can
be
things
like
security
event.
They
could
be
account
status,
change
events,
they
could
be
risk
events
that
says,
there's
been
a
password
reset
against
this
user,
which
which
then
the
receiving
side
can
decide
what
to
do,
and
they
might
not
go
to
the
skim,
server
or
the
directory
on
their
side.
They
may
actually
send
it
off
to
their
security
team
for
action
there
and
they
there's
some
other
proprietary
action.
D
So
these
are
the
cases
I've
covered.
There
may
be
more,
but
let's
go
through
them
next
slide.
D: It's partially addressed in the current spec with the bulk operation, but in reality people are probably doing exports to JSON, transporting that file securely to the receiver side, and then doing an import. When that happens, we can talk more about this issue, but I think it's a special-case problem that, depending on what you have, may be solved in different ways.
D
So,
for
example,
if
all
of
your
nodes
in
your
skim
service
provider
are
communicating
and
replicating
using
a
message
bus
these
days
often
the
message:
bus
technology
itself
is
your
recovery
mechanism.
So
if
you
lose
a
node,
you
simply
go
to
the
message:
bus
and
reload
the
whole
message,
bus
and
you're
up
and
running.
That's
one
of
the
techniques.
There
are
many
other
techniques,
but
that's
sort
of
an
issue
for
product
managers.
I'm
not
sure
it's
an
issue
for
us
to
have
to
deal
with
when
we're
talking
about
that
kind
of
recovery.
D: So in this side case, something goes on on the controller side and it wants to reconcile a change that's of concern on a slave side, or vice versa: it wants to pull the change information across and decide what to do locally to keep in sync.
D
So
this
is
another
case.
I
think
this
is
important,
because,
where
we're
in
the
past
with
ldap,
we
didn't
standardize
replication,
because
typically
customers
would
be
deploying
one
server
across
all
of
their
one
vendor
one
product
across
all,
so
what
each
product
chose
to
do,
didn't
really
matter
or
impact
replication.
D
Another
thing
that
that
typifies
this
environment
is
that
you're
all
in
the
same
security
domain
schema
is
likely
to
be
the
same,
and
your
goal
is
to
copy
information
between
nodes
in
its
whole
form.
So
in
this
case,
when
you
send
a
change,
you
want
to
send
all
of
the
information
all
at
once
when
I'm
cross
domain.
D
I
know
that
the
receiving
domain
might
be
interested
in
the
change,
but
I
don't
know
necessarily
what
information
they're
interested
in,
so
that
the
concern
with
cross-domain
coordination
is
don't
just
send
them
all.
The
changes
that
occurred
just
send
them
what
they
need
to
know
in
order
to
resolve
the
issue
in
internal
domain
will
publish
all
the
replication
information
so
that
the
other
node
can
be
quickly
made
current.
That's
really
the
difference.
I
see
between
the
two
cases
next
slide.
D: So that's just something in the back of my mind: for those two diagrams, for both paging and event delivery, you might want to start thinking about throwing away master/slave and start thinking about it being bidirectional. You may have to set policy that says one domain is authoritative over roles but the other domain is authoritative over photos, who knows, but it's a little bit more complex than we had even six years ago. Thanks, next slide.
D
This
sort
of
flows
out
of
the
risk
sharing
events
under
open
id
people
are
interested
to
know
about
higher
level.
Events
that
come
out
of
a
skim
repository
such
as
a
password
was
changed
or
an
account
password
was
reset
or
suspicious
activity.
There's
a
bunch
of
high
level
events
that
may
be
detectable
or
maybe
being
tracked
in
a
skin
server.
D
We
don't
have
necessarily
directly
all
of
the
all
of
the
data
involved
in
this
because,
as
I
said,
we
haven't
standardized
things
like
password
failure,
accounts
and
things
like
that,
but
certainly
the
side
that
wants
to
share
that
information
has
built
that
information
with
a
combination
of
skim
and
other
services,
and
they
have
that
data
and
they
want
to
be
able
to
share
that
so
that
receiving
clients
can
know
that
they
can
take
independent
action.
D
So
the
other
thing,
so
the
scenario
that
risk
sort
of
worries
about
is,
if
somebody
adds
a
new
authentication
factor,
they
want
to
make
sure
that
the
person
adding
the
authentication
factor
wasn't
somebody
stealing
the
account,
and
so
they
changed
their
security
policies
temporarily
on
the
receiving
side
to
enable
account
recovery
for
a
period
of
time
and
then
once
the
factor
is
laid
down
and
it's
working.
D
They
now
know
that
the
account
is
good
to
go,
sometimes
they're.
Just
taking
that
information
and
saying
we're
going
to
reset
our
login
sessions
and
force
that
user
to
log
in
so
that's
what
signaling
is
all
about.
Is
the
ability
to
take
independent
action
by
the
receiver
and
decide
what's
appropriate,
based
on
your
own
local
policies.
D
I
wish
now
I'm
looking
at
this
slide
on
my
desktop.
It's
quite
small.
I
tried
to
go
through
a
comparison.
I
I
will
first
of
all
say
I'm
biased,
because
I
started
off
by
looking
at
paging.
D
My
chief
concern
with
paging
is
that
you're
only
doing
it
on
a
certain
frequency,
it's
periodic,
whereas
events
you're
trying
to
get
make
sure
that
it
gets
delivered
as
in
close
to
real
time
as
you
can
and
the
effort
you
put
into
making
sure
that
happen
will
get
you
closer
to
real
time.
If
you
decide
you're
not
going
to
do
that,
you
can
still
throttle
event
delivery
through
whatever
mechanism
you
want
to
use
and
say
no,
I'm
still
I'm
just
going
to
pull
for
events
every
15
minutes.
D
You
can
go
both
ways
with
that,
but
on
the
left,
my
concern
has
always
been
downloading.
The
entire
data
set,
particularly
if
you're
talking
about
billions
of
users
is,
is
a
a
real
challenge
in
terms
of
raw
cost
data
exposure
you're,
exposing
all
of
your
data
every
synchronization
cycle,
I
would
prefer
to
see
a
way
in
in
the
skim,
coordinated
event.
Spec.
D
In
fact,
the
changes.
Don't
initially
at
the
event
stream
doesn't
contain
raw
data,
it
just
contains
a
notice
that
an
identifier
resource
has
changed
and
then
the
client
can
go
and
find
out
what
that
change
was
and
that's
how
it
works.
So
we
we
share
minimal
information
in
that
profile
and
I
think
that's
why
it's
a
better
draft.
I
think
also
that.
D
The
ability
to
leverage
that
event
mechanism
to
do
things
like
async
sing,
async
signaling,
so
the
bulk
request
has
completed
to
send
security
events
such
as
risk
signals
and
other
events
creates
a
lot
of
value
and
it
sort
of
fits
in
with
a
general
pattern
between
open
id
oauth
and
skim
as
a
three-legged
security
system
that
can
be
coordinated,
and
so
that's
where
I'm
coming
from
so
that's
it
and-
and
I
also
would
like
to
invite
danny-
and
I
wish
matt
was
here
to
also
add
where
I've
missed
the
benefits
of
theirs.
D
I
know
of
a
lot
of
databases
that
can't
support
cursor
based
paging
or
if
they
do,
you
get
problems
like
thrashing,
because
what
they
end
up
doing
is
maintaining
a
copy
of
the
entire
database
in
memory
and
you
that
leads
to
swapping
and
other
things
and
if
you're,
a
service
provider
with
many
tenancies
on
the
same
server,
you
haven't
got
a
lot
of
memory
or
a
lot
of
disk
to
hold
multiple
copies.
If,
if
you've
got
50
clients
doing
paging
at
the
same
time,
you
won't
have
enough
memory
at
all.
D
I
still
think
you
could
avoid
paging
by
doing
a
get
as
long
as
the
service
provider
agrees
to
give
you
unlimited
search
results,
you
could
still
do
a
cyclical
get
and
that
can
work,
and
that
is
really
simple,
but
it
still
has
all
the
problems
of
you
can
only
afford
to
do
that.
Only
every
so
often
once
a
day
or
once
an
hour,
and
that
leaves
you
a
one-hour
gap,
and
is
that
good
enough?
G
Hey
danny
with
microsoft,.
G: I did have a chance to speak with Matt Peterson before this; he's unfortunately not able to join us today.
G
So
the
one
of
the
problems-
and
it
comes
down
to
either
we
would
have
to
you-
know
specify
it
in
the
in
the
events
draft
or
it's
left
up
to
the
implementers
is
that
in
some
implementations
of,
like
shared
signals
transmission
once
the
signal
has
been
provided
and
the
the
receiver-
and
I
apologize
I'm
using
the
wrong
terminology
once
the
c
receiver
is
responded
with
200.
Okay,
the
transmitter
may
not
have
a
that
obligation
to
actually
hold
that
message
anymore.
G
One
of
the
benefits
of
first-year-based
pagination
is
that
the
for
a
limited
period
of
time
that
same
cursor
can
potentially
be
replayed
to
in
an
event
that
there's
some
sort
of
you
know:
infrastructure
problem,
the
vm
hosting
this
has
gone
down
you're
able
to
get
your
data
back.
I
it's
solvable
on
the
shared
signal
side
as
well
other
things
sort
of
in
favor
of
cursor-based
pagination.
G: ...it is still going to be the simpler option compared to having to set up whichever elements are needed infrastructure-wise for the Shared Signals processing. And as a counterpoint to the downfalls, from a software engineering standpoint, of cursor-based pagination: Matt's feedback, to paraphrase him, was that there are also a number of existing databases...
G
You
know
used
in
identity
systems
where
they
natively
do
cursor-based,
pagination
and
even
today,
to
do
index-based,
pagination
they're
having
to
store
their
results
in
memory
in
an
index
format,
because
that's
not
natively
how
they
work
with.
Today,
I
feel,
like
I
had
other
points,
I'm
losing
my
trying
to
thought
slightly,
not.
D
Sure
why
don't
I?
Why
don't
I
respond
to
the
first
couple
you
made
and
then
you
can
add
yours,
because
I
don't
want
to
lose
the
thought
on
the
set
transfer.
We
did
a
lot
of
in
the.
In
the
security
token,
the
id
token
working
group
which,
which
sort
of
took
the
skim
group
and
a
number
of
other
groups
to
work
together
on
a
common
spec.
We
talked
about
this
and
the
issue
from
any
service
provider
was
the
sheer
number
of
events
that
are
flowing
out
and
being
able
to
persist.
D
Those
for
long
periods
of
time
becomes
untenable
or,
let's
just
say,
we
didn't
get
consensus
on
that.
What
the
group
decided
was
is
that
the
transfer
spec
give
you
guaranteed
transfer,
because
it's
not
just
a
200..
The
client
responds
it
acknowledges
that
an
event
was
received.
The
responsibility
for
recovery
then
becomes
the
receiver's
responsibility.
So
once
that
client
receiver
says
yes,
I
got
that
notice,
I'm
acknowledging
it
that
tells
the
service
provider.
It's
now
allowed
to
forget
about
the
event.
So
that's
the
way
it
normally
works
in
in
practice.
D
There's
nothing
saying
that
the
publisher
of
the
event
can't
keep
the
event
indefinitely.
There's
nothing
saying
that
or
not
what
what
was
wanted
was
the
ability
for
the
publisher
to
only
have
to
hold
events
for
two
or
three
days.
The
idea
would
be
you're
sinking,
let's
say
with
azure
between
google
and
azure
and
azure
goes
offline
or
google
goes
offline
for
three
days.
D: But if we're talking about recovery of a lost server that needs to go back and figure out a month's worth of history since the last backup, that's a different thing, and it would be up to the receiving domain to figure out how they're going to manage that recovery issue anyway. So again, making recovery the client's responsibility gives the client domain full control over their data set and how they do recovery, and it lets the other side...
A: Well, so I just wanted to say we're transitioning, and we are behind schedule, but the whole notion of why I put this presentation ahead of yours, Danny, was to trigger the discussion of the use cases as well as the requirements as we see them. That leads us to the notion that we've already adopted the SCIM Events...
A: ...draft; there's been discussion about adoption of the other two drafts, or potentially, right. And so this is where we allocated 40 minutes, we're now down to 20-some-odd minutes, to get into that discussion of alignment, to drive consensus, sorry, amongst the participants here, of at least getting alignment and agreement that these are the use cases that are driving the requirements we need to address in the working group.
A
So
I
think,
with
that
I
mean
I
I'm
actually
fine,
because
you've
started
talking
about
the
pagination
right
but
danny
I
had
allotted
for
you
and
janelle
to
help
lead
that
discussion.
It's
too
bad
matt's,
not
here,
so
it's
not
just
you
and
phil,
but
others
p.
Please
feel
free
to
to
jump
in
as
well.
G
Yeah,
I
I
agree
I
wish
matt
was
here
because
he
can
speak
to
this
better
than
I
can
so
yeah.
I
guess
the
so
the
use
case
behind
pagination
and
just
like
full
transparency.
I
I
work
with
skim
almost
exclusively
in
the
scenario
of
we'll
call
it
a
centralized
client
working
with
a
service
writer
or
a
set
of
service
providers,
and
in
that
case,
usually,
the
client
is
acting
as
an
authoritative
source
trying
to
push
data
elsewhere
in
those
scenarios,
I
think
pagination
is
needed.
G
For
instance,
I
guess
so
it's
not
just
push.
It's
also
pool
data
potentially
where
you
know
data's
going
from
somewhere
else
in,
but
in
either
case
in
instances
when
you
want
to
not
only
sort
of
you
know
make
a
suggestion
on
the
state
of
things.
You
know
you're
sending
the
data
that
you
have
and
there
may
be
other
data
in
a
system
in
when
you
as
a
client
need
to
sort
of
be
able
to
see
the
full
state
of
the
external
connected
skim
system.
G
Precisely
the
problem,
you
said
of
you
know
millions
of
results.
I
don't
think
cursor-based
pagination
alone
solves
that
it's
probably
cursor-based.
Pagination
with
some
sort
of
delta
query
together,
allowing
to
paginate
a
set
of
results,
and
even
if
you
have
five
million
results,
if
you're
able
to
first
say
give
me
the
results
that
have
changed
since
the
last
query
and
then
break
them
up
into
smaller
chunks,
it's
sort
of
like
it
correctly
wrong.
Isn't
that
how
old
that
lets?
You
do
it
like
it's!
A
Yeah,
I
I
don't
know
if
there's
a
an
ldap
user
here
that
can
speak
authoritatively.
D
Well,
my
my
concept
was
the
thrill
here
again:
the
cn
equals
change.
Log
thing
in
ldap
was
rather,
in
my
experience,
a
description
of
how
not
to
do
things
because
it
was
a
common
changelog.
You
you,
it
was
very
hard
to
implement
good
security
on
that,
because
it's
wide
open
and
if
you
have
multi-tenancies
it
just
gets
it
becomes
very,
very
complex
to
secure,
and
it
also
has
a
lot
of
high
value
data
in
it.
G: Yeah, and so I can't speak authoritatively, just because I'm starting to get outside of my expertise, but my understanding is that there are a number of other REST APIs, whether for identity or something else, where a delta query mechanism does exist. So I don't think we're treading brand new ground so much as adopting a solution that's used elsewhere; the exact engineering pitfalls I don't think I'm particularly qualified to discuss. But, to pivot slightly...
G: ...they could just use an event flow to keep track of things, but there are also systems where that's not really feasible. And so the high-level concept of security event tokens containing SCIM information, I'm aware of...
G
Like
other,
I,
you
know,
people
conceiving
the
idea
besides
you
as
well
phil,
although
you're
the
only
one
to
actually
publish
a
draft
and
in
the
use
case
that
the
the
other
group
that
I've
worked
with
was
aware
of
our
I
guess
was
focusing
on
was
more
around
a
skim
service
provider,
communicating
high
priority
results
or
high
priority
data.
That
could
not
wait
for
that.
G
Next,
you
know
pulling
cycle
that
you
mentioned
here,
maybe
like
three
hours
away,
so
the
classic
example
from
sort
of
an
identity
like
synchronization
provisioning
standpoint
being
a
client,
normally
pulls
data
from
a
human
resources
provider
and
uses
that,
to
you
know,
do
things
downstream
and
they
do
that
every
three
hours
and
the
the
skim
service
writer,
which
is
the
human
resources
organization.
They
have.
You
know
breaking
news
that
a
certain
employee
has
been
fired
and
needs
to
be
terminated
immediately,
and
you
know
like
whatever
happens
after
somebody's
fired.
G
You
know
what
the
facility
will
call
it
and
so
alerting
those
high
priority
changes
is
the
use
case
that
you
know
sort
of
me
speaking
as
an
implementer
rather
than
an
author
or
anything
look.
First
saw
when
reading
your
draft.
A
So
phil
I
I
was
just
gonna
channel
in
the
in
the
chat
when
ting
mentioned,
I
think
cursor
pagination
and
events
both
have
its
unique
use
cases
in
the
area
of
data.
Sync,
I'm
just
gonna
read
it.
I
felt
we
need
both,
even
though
the
changing
event
published
should
be
the
primary
mechanism
to
sync
the
data
efficiently,
so
that
kind
of
triggered
a
question
in
that
danny.
If
I'm
understanding
correctly
you're
also
describing.
G: Yeah, I guess, since I'm standing up at the mic, I will sit down in a second. Speaking more as an implementer, I agree that both are needed. I sort of just described my thoughts on the use case, and it's a bit more narrow than some of the examples that Phil has given, whereas we haven't really ever looked at it, speaking for Microsoft, as a sort of wholesale replication feature to move all data around.
G: Not quite. So there are certain use cases, such as the human resources high-priority change notification of a termination, where implementing events makes sense, as opposed to, at scale, all replication of changes from side A to side B happening through events rather than through a polling model.
G
My
sort
of
you
know
rough
understanding,
not
a
hard
statement
or
commitment
is
that
as
microsoft
as
an
implementer,
we
would
prefer
to
act
primarily
as
a
client
and
just
pull
for
those
changes
using
cursor-based
pagination,
a
delta
query
and
so
on,
rather
than
also
having
to
have
a
listener
for
the
events
to
come
in.
Okay-
and
I
guess
just
one
final
clarify
so
in
the
in
large
distributed
cloud
systems,
I
think
there's
maybe
a
slightly
higher
risk.
G
You
know
when
you
have
like
a
thousand
or
ten
thousand
or
a
million
or
whatever
little
containers
running
they're.
One
of
them
may
go
pop
off
and
die
and
get
reprovision.
So
in
any
scenario,
where
you're
receiving
something
like
that,
the
the
polling
model
is
a
little
safer.
I
think
in
when
the
client
is
a
cloud
distributed
system
because
you
can
go
and
remake
that
same
request.
If
something
happens
to
one
of
your
many
little
nodes
that
are
running
and
I'm
not
a
software
engineer.
So
if
I'm
wrong,
I'm
sorry,
I'm
saying.
D
Yeah,
I
I
wanted
to
point
out,
because
people
were
looking
for
a
compliment
to
skim
events.
There's
actually
two
other
mechanisms
we
haven't
discussed
as
well,
that
can
be
used.
One
is
rfc
7232,
which
people
haven't
really
looked
into
because
skim's
an
http
profile,
and
it
does
mention
e-tags
and
what
e-tags
give
you
is
the
ability
to
put
http
preconditions
on
your
re
on
your
request,
so
that
means
you
can
say,
get
this
resource
as
long
as
it's
changed
since
I
last
queried
it.
D
If
the
e
tag
has
changed,
the
e
etag
is
just
a
hash
of
the
resource.
So
if
the
resource
has
changed,
give
me
the
resource,
so
that's
what
happens
on
a
get
and
then
on
the
modify.
You
can
put
a
precondition
that
says
this.
This
condition
only
applies
if
the
resource
hasn't
changed.
Underneath
me,
so
that's
one
of
the
techniques
that
http
offers
and
the
skin
protocol
spec
does
specify
that
support
for
etag
the
other
mechanism
that
skim
offers.
D
If
you
want
to
know
what's
changed,
is
you
can
do
a
general
query
and
query
metadata
last
modified
and
say
since
a
certain
date?
That
might
be
somewhat
more
crude
than
people
want,
but
those
two
things
are
sort
of
there
and
that's
what's
in
the
back
of
my
mind
as
the
complement
to
the
event
spec.
D
If
I
need
to
get
a
list
of
identifiers
that
have
changed
since
its
last
date,
I
could
do
a
skim,
get
ask
for
attribute
id
and
last
modified
equals
metadot
and
that
date,
so
so
that's
already
possible
in
the
current
spec.
We
don't
need
a
new
spec
for
that.
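A minimal sketch of the two mechanisms described above, using only Python's standard library; the host name, resource id, and ETag value are placeholders, while the If-None-Match/If-Match headers and the meta.lastModified filter are the standard RFC 7232 and SCIM 2.0 constructs being referred to.

```python
import urllib.parse
import urllib.request

BASE = "https://scim.example.com/v2"  # placeholder SCIM service provider

# 1) RFC 7232 precondition on a read: only return the resource if its ETag
#    differs from the version we already hold (otherwise 304 Not Modified).
read = urllib.request.Request(
    f"{BASE}/Users/2819c223-7f76-453a-919d-413861904646",
    headers={"If-None-Match": 'W/"a330bc54f0671c9"'},  # ETag from a prior response
)

# 2) RFC 7232 precondition on a modify: only apply the change if the resource
#    has not changed underneath us since we last read it.
patch = urllib.request.Request(
    f"{BASE}/Users/2819c223-7f76-453a-919d-413861904646",
    data=b'{"schemas":["urn:ietf:params:scim:api:messages:2.0:PatchOp"],'
         b'"Operations":[{"op":"replace","path":"active","value":false}]}',
    headers={"If-Match": 'W/"a330bc54f0671c9"',
             "Content-Type": "application/scim+json"},
    method="PATCH",
)

# 3) Crude change detection with a standard SCIM filter: list the ids of
#    resources whose meta.lastModified is newer than the last poll.
query = urllib.parse.urlencode({
    "attributes": "id",
    "filter": 'meta.lastModified gt "2022-07-29T13:00:00Z"',
})
changed = urllib.request.Request(f"{BASE}/Users?{query}")
# urllib.request.urlopen(...) on any of these would perform the actual call.
```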
G
Yeah, I think there's hesitance to lean on the meta.lastModified thing. I mentioned it, you know, 45 minutes ago, probably because in distributed systems there are way more timestamps to keep track of, potentially, and it's just harder to get a certain level of precision in the results if there's time drift between different systems. And then with ETags, my understanding of them — when you mentioned them on the mailing list somewhere in the past month, I went and did a bunch of looking, because I've never had to interact with them — is that that's a resource-level hash, so it doesn't necessarily help with the delta query at a large scale. You know, if I have a human resources platform with 500,000 resources in it that I'm trying to recreate, I can't go and say, give me anything that's changed since I last talked to you three hours ago.
D
It's Phil; I'll just respond. I don't know that precision matters on lastModified. You could certainly add a few minutes to that date, and you'll get more data than you necessarily want, but I don't see you losing data. It's certainly better than querying everything.
F
Okay. Pam Dingle — sorry, I forgot to say that before — from Microsoft. I think we can debate till we run out of breath, but we really need to get this data into the hands of the engineers that are going to implement. So can I suggest, as a next step, we create an analysis, and maybe it's this comparison chart but with all of the options instead of just two of the options, and we actually create a survey that we can send to as many engineers as we can that we know have implemented SCIM 2.0, and try to get answers back in a format we could then collate to understand what it means.
A
It's possible. It's the notion of posting on the mailing list, and then those who are participating in SCIM soliciting that feedback from their implementers. I call them deployments or customers, but yeah.
D
All the research I have on databases says that cursors are for subsets of data, not whole sets of data, because you either end up locking all the rows in your database — and if you've got a billion entries in your database, trying to lock all the rows likely won't happen — or the only other way is what some servers do. That was a problem in LDAP that I'd like to keep out of SCIM. So, with all those caveats, I think if we choose multiple methods there has to be a strong reason for paging to exist outside of coordinated replication; otherwise you're just going to have half the community do one spec and half the community do another, and you won't have interoperability, and you'll end up either implementing both and being frustrated by that. So I'd rather have one spec.
G
Hi, Danny Zollner, Microsoft, again. Oh boy, what was I going to say? Okay, so as I said before, the use case that I see, and the reason why I think the shared signals or the SET-based SCIM events thing would still be adopted by some people, is that very specific problem of a service provider needing to urgently communicate something back to the client.
G
I can definitely see your concern that if we go ahead with cursor-based pagination — and we'll say the combination of cursor-based pagination and delta query, because they sort of jointly solve a lot of the large-scale problems — then people would not necessarily implement shared signals instead to do that. Not being able to speak, other than channeling Matt Peterson's words, for instance on the ease of implementation of one versus the other — you know, the database and locking and memory concerns and all that — I suspect that the majority probably would go with the cursor-based pagination and delta query unless they were operating more as a source rather than a recipient of data; it's still a pull model.
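Since the cursor-based pagination and delta-query details are still under discussion, the sketch below is only illustrative of the pull model Danny describes: the `cursor`/`nextCursor` names and the endpoint are assumptions, not a published protocol.

```python
import json
import urllib.parse
import urllib.request

BASE = "https://hr.example.com/scim/v2"  # placeholder HR service provider


def fetch_page(cursor=None, count=100):
    """Fetch one page of Users using cursor-style pagination.

    The 'cursor'/'nextCursor' names are illustrative of the approach being
    discussed, not a finalized parameter set.
    """
    params = {"count": count}
    if cursor:
        params["cursor"] = cursor
    url = f"{BASE}/Users?{urllib.parse.urlencode(params)}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def pull_all_users():
    """Poll the provider as a client, walking pages until no cursor remains.

    Because each page is requested independently, any one of many worker
    nodes can simply re-issue a failed request -- the resilience point made
    earlier about distributed clients.
    """
    cursor = None
    while True:
        page = fetch_page(cursor)
        for user in page.get("Resources", []):
            yield user
        cursor = page.get("nextCursor")
        if not cursor:
            break
```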
G
So, you know, the HR provider problem of needing to tell somebody that a user was terminated — I've had trouble following the email threads at first, just because I think the use cases that I've seen you envision, Phil, for the shared signals have been much wider than what we had originally envisioned ourselves, speaking on behalf of my colleagues at Microsoft.
A
Oh, now we can hear you, but if you can speak a little louder, that would be great.
E
Okay, so basically we are doing a lot of integrations, certainly with authority sources and with other HR systems. I think our experience, at least from my visibility, is that we integrate with basically both large systems and small systems. So my feeling is, for one, probably all of them usually start with a search-based pagination — what we call plain pagination — because that has other use cases, like search use cases.
E
Usually they start with that and just publish that API rather than event systems, and we start from the existing interfaces that we integrate with. But later on, if they do see use cases for events — when they're running into scalability issues — they start to publish those data changes. Something like that.
E
Like for some real-time notifications through an event channel, right. So I feel that — I'm not sure, as a standard or a community, whether we want to give people choices — because in reality people always start with, like, cursor-based paging and only later add event mechanisms. That looks like a common pattern I ran into with all of the integrations I did. That's just a common reality. So I'm not sure, as a community, as a standards body, do we want to force a particular solution more geared towards the large systems, or do we want to give people choices? That's one. The second question is that, like I said, they usually start with a search-based or cursor-based approach, and later on they add event channels when they run into scale issues or when they really require real-time notifications, in terms of mechanisms or events.
E
Usually there could be, like, pub/sub systems, or they could use webhooks to notify us on demand — like HR systems notifying a termination or something like that in real time with webhooks. So those are just some of the experiences we had that I wanted to share here.
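As a rough illustration of the webhook-style delivery mentioned here — an HR system POSTs a change notification such as a termination and the client reacts immediately instead of waiting for its next poll — the path and payload shape below are hypothetical, not from any SCIM draft.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class ChangeHook(BaseHTTPRequestHandler):
    """Tiny listener for change notifications pushed by an HR system."""

    def do_POST(self):
        if self.path != "/hooks/scim-changes":            # hypothetical endpoint
            self.send_response(404)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # React to the urgent change (e.g. disable the account downstream).
        print("received change notification for:", event.get("userId"))
        self.send_response(204)                            # acknowledge receipt
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ChangeHook).serve_forever()
```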
A
The other suggestion I might make, and that goes to Phil, Danny, and to Matt when he comes back, is that the comparison chart Phil started is a really good way for us to provide that succinct data, if you will. So I'd encourage you to do an update to it based on yours, Danny, and Matt's feedback and alignment.
E
And another point I want to mention is that we also run into cases where we have to do a full import — like an export and import — in order to verify or fix some issues on the data stream in the event channels. We run into issues where the delta changes we get on the event channel sometimes cannot be resolved, and we have to resort back to the full import to resolve some of the bugs or issues on the event channel.
G
One final thing — this is actually a thought from Matt Peterson when we were discussing this earlier this week — is that I think we should look at the capabilities provided by the events draft as a potential optimization to things like change detection and whatnot, versus a replacement for it.
A
Okay, given that we only have three minutes left, I will close this topic and do a quick chairs' update.
A
We are behind on our milestones. The chairs do need to update the milestones, and Pam, I think when you post the use cases, Aaron and I can come back and give you a SWAG on when we can have you target a working group last call. It may be a living document because of these use cases, and we're still teasing out requirements.
A
I see the path of a lot of drafts, so we can take up in discussion on the mailing list whether there is one or several. That said, we cannot update those milestones until we have that discussion. So for that we need a couple of drafts, and I believe, Danny, you've submitted one of them, for the roles and entitlements.
A
So, as the author, you can post on the mailing list and request feedback and comments. As the chairs, we look for at least three individuals who are not authors of the draft to provide any feedback — well, let me rephrase, constructive feedback, either positive or not — on whether this draft should be considered. I believe the comments so far are that the drafts are considered to be in scope, but it's more a question of whether one can serve as a seed, a starting point, for us to move forward and progress on a particular topic. Okay, so I think with that I may just adjourn us, unless somebody really has anything burning they want to say in the last 30 seconds. Going once, going twice, all right. Thank you all for participating, both remotely and in person; it's been a good IETF 114.
A
We will try and schedule an interim, likely in September, I think, is what we were discussing. So we'll put out a Doodle poll to pick a day, and then from there, post that interim, we hope to see you at IETF 115, either remotely or in London.