From YouTube: IETF112-SCIM-20211111-1200
Description
SCIM meeting session at IETF112
2021/11/11 1200
https://datatracker.ietf.org/meeting/112/proceedings/
B
Describe just who says what, right, so, conversation style. Does that work for you?
A
Oh, I posted it on the chat. It is also on your Meetecho view.
A
Thanks, yes. So I tried to preload the agenda there. Okay, in the interest of time, if I could get a second one, that would be great, please. We've got a full agenda today, so I want to welcome you to our first official continuation of the SCIM working group.
A
The session is being recorded and, why is it not advancing? There we go. Okay, so there may be a few of you for whom this is the first SCIM meeting, so just to provide you the Note Well: there are some notes to take here with respect to privacy. We will be following the IETF privacy processes, as well as the IETF processes. Sorry, I'm not going to read them through, but there is the Note Well to be considered.
A
Okay, some meeting tips. Please stay muted. A headset is recommended, although for me the headset doesn't work. We usually track attendance through the blue sheet, but now that we're meeting virtually, it's being tracked automatically. There are chat rooms and the Meetecho; some of you are already on there, and if you need further assistance, there are a couple of links there to help you.
A
The other thing that was noted that we should highlight is the code of conduct. So, a reminder of some of those points here: as we participate, we should extend respect and courtesy. We all bring in different perspectives; we are a diverse group, so just be respectful of that. Keep our discussions professional and stay on point. All right, so with that in mind, thank you, Pam, again for being our Jabber scribe. We definitely could use a second or third one.
A
There we go. Hey, Barry. So I'm just gonna spend a few minutes to remind everyone of what we said we would do in the charter. The milestones are there to serve as markers, and we can update as we go along. But getting to the agenda: first up, we've got Janelle and Danny to help provide an introduction of what we did when SCIM was first chartered and why we rechartered, meaning they'll go through the current original RFCs, with highlights of the new work that we may want to tackle there.
A
I don't see him here, but he is on the docket to present his current draft on multi-value filtering, and then Danny will be talking about a couple of proposals that he put together in IETF drafts that go to the SCIM extensions. And then, at the end, I just wanted to cover our next steps and the types of tools that we may want to use.
A
I've got a proposal that I'll put up, a couple of polls for how we want to proceed, and that kind of brings us to the end of the session. So with that in mind, any comments or feedback?
A
Going once, going twice. If not, okay; I'm already one minute behind. So I know this is an eye chart, but I did put in the link to our actual charter. But again, just to recap: what we agreed to do was to revisit, review, and augment the use cases that will help drive the work that we need to do, vis-a-vis updating both the schema and the protocol RFCs, followed by some of the other extensions. So again, I'm not going to read through the charter.
A
I saw somebody come up on video. So if you want to get on the queue, then I suspect I would like for you to come in, and, sorry, I'm not awake yet, to put yourself on the queue through the participants list. You don't need to turn your video on. Okay. So with that being said, what we also provided was just a stake in the ground for the kinds of documents that we may want to do.
A
Besides the revisions to the base schema and protocol, we just laid out some, what I'm calling markers or milestones, rough targets of where we want to be. That doesn't mean that this is the final thing. It is up to the working group, for us to decide how we want to break up the work and move forward on things.
C
Welcome, everybody. Nancy, are you ready for Danny and I to take it away? Okay, please go. Welcome, everybody. I'm Janelle Allen, and I'd also like to introduce you to Danny Zollner. We both have something in common: we are product managers. Danny's at Microsoft and I'm at Cisco. Anyway, Nancy, could you flip to the next slide, please?
C
We'd just like to start with: what is SCIM? You know, this standard has come about over the last 10 years, I think. Yeah, when did it get released, Danny, the RFC? Do you remember?
C
Yeah, 2015. And so it's been around for a while. And really, what was it designed to do? It was designed for sharing identity data across contexts, and it really was normalizing this data. You know, "normalizing" maybe is a tricky word to use, but it was abstracting this data away from where it was residing in its data source, to be shared between different organizations, identity contexts. "Domains" is what we call it in the standard.
C
It consists of a communication protocol and the core schema, and it allows for extensibility. It wasn't a net-new standard; it was building upon a lot of work that had come before us in previous waves of deploying and handling identity data. And it was designed to be fast and cheap and easy to use. Nancy, next slide, please. So again, why SCIM? It abstracts away this underlying data structure.
C
If the data is stored in a database such as SQL, or in a directory such as an LDAP directory or AD or elsewhere, it takes away the notion of that specific data storage mechanism. It abstracts that away and treats every site the same, and that enables scale, and it's very helpful in that way.
C
The format of that data might be stored with, you know, last_name: Jane Smith, but this example is showing that there's a service, a SCIM service, that's sharing that data.
C
You know, to an app that's running in the cloud somewhere, and that app wants to store that same user in a different data structure, the data structure being suggested here as a directory structure. And we can see that the SCIM service transforms the data from its original data store into this standardized JSON format that SCIM takes the data through, and then the client can decide how to interpret that data and then store it on the other side.
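The transformation just described can be sketched in a few lines. This is a minimal illustration, not a real SCIM service: the source-side field names (`login`, `first_name`, `last_name`) are made-up examples of a native data store; only the output shape and the schema URN follow RFC 7643.

```python
def to_scim_user(record):
    """Map a hypothetical source record to a SCIM core User resource."""
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": record["login"],
        "name": {
            "givenName": record["first_name"],
            "familyName": record["last_name"],
        },
    }

# A record as it might sit in the original data store (illustrative).
source = {"login": "jsmith", "first_name": "Jane", "last_name": "Smith"}
scim_user = to_scim_user(source)
print(scim_user["name"]["familyName"])  # Smith
```

Whatever the storage on either side, both ends only need to agree on this standardized JSON representation in the middle.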
C
The SCIM protocol is defined in RFC 7644, and it has a RESTful approach in design. It's a set of APIs; they use the HTTP methods, you know, the verbs: GET, POST, PUT, PATCH, and DELETE, which is very convenient for identity data, which follows the create, read, update, and delete methods.
C
So we can map those quite easily to these HTTP methods, and we can send the standard SCIM JSON payloads to the resource endpoints, which can be Users and Groups, as defined in the standard, and other resources, which are definable by the standard.
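The CRUD-to-HTTP mapping being described can be written out as a small table. A sketch only: the `request_line` helper is a hypothetical convenience for illustration, while the verb mapping and the `/Users`-style resource paths come from RFC 7644.

```python
# How SCIM's CRUD operations map onto HTTP methods (per RFC 7644).
CRUD_TO_HTTP = {
    "create": "POST",    # POST /Users creates a new resource
    "read":   "GET",     # GET /Users/{id} retrieves a resource
    "update": "PUT",     # PUT replaces; PATCH (partial update) also exists
    "delete": "DELETE",  # DELETE /Users/{id} removes the resource
}

def request_line(operation, resource, rid=None):
    """Build the HTTP request line for a CRUD operation on a SCIM resource."""
    path = "/" + resource + ("/" + rid if rid else "")
    return CRUD_TO_HTTP[operation] + " " + path

print(request_line("create", "Users"))         # POST /Users
print(request_line("read", "Users", "2819c"))  # GET /Users/2819c
```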
C
All of the endpoints provide, well, there are three endpoints, I should say, that provide discoverability for understanding how to interact with the server, or the SCIM providers.
D
Really, regardless of where the identity data is stored, those endpoints allow for sort of the full gamut of the REST verbs: GET, POST, PUT, PATCH, and DELETE. And both Users and Groups, I guess, come with a core set of attributes, and then the Users also have, in their schema, an enterprise schema that is, you know, an extension of a couple of extra attributes, that is included in the SCIM 2.0 standard. Groups has a simpler schema compared to Users; the only group-specific attributes that aren't part of a common pool would be displayName and members, but part of our charter is to look into that and expand it.
D
So, you know, examples being: maybe you've got an internal application and it's going to allow the user to update their password using the SCIM standard, so they can send a POST or a PATCH or whatnot. Well, not a POST, because how are you going to create yourself? But it allows them to, you know, manipulate themselves specifically.
D
Next on the list, we have three that all have that check mark next to them, and these are what we refer to as the discovery endpoints. So /Schemas allows you to retrieve available schemas, and the schemas are just, you know, representations of a set of available attributes for a resource type. So the Users resource has two schemas, the core and the enterprise; Groups just has core, and, you know, it's very extensible.
D
Moving on to the next one, /ResourceTypes: that's a way to discover what resources are available. So in, you know, the core standard implementation, /ResourceTypes would return Users and Groups. And then /ServiceProviderConfig allows you to read configuration details specific to that SCIM server or service provider, such as, you know, is such-and-such feature enabled, and if it is, you know, what are the parameters of it.
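A sketch of the kind of /ServiceProviderConfig response being described: the server advertises which optional features it supports and their parameters. The specific values below are illustrative, not from any real server; the attribute names (`patch`, `bulk`, `filter`, `supported`) follow RFC 7643's ServiceProviderConfig schema.

```python
# An illustrative /ServiceProviderConfig response body.
service_provider_config = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:ServiceProviderConfig"],
    "patch": {"supported": True},
    "bulk": {"supported": True, "maxOperations": 1000, "maxPayloadSize": 1048576},
    "filter": {"supported": True, "maxResults": 200},
}

def feature_enabled(config, feature):
    """Check whether the service provider advertises a given optional feature."""
    return bool(config.get(feature, {}).get("supported"))

print(feature_enabled(service_provider_config, "bulk"))  # True
print(feature_enabled(service_provider_config, "sort"))  # False
```

A client would typically fetch this once and use it to decide, for example, whether it may batch changes through /Bulk or must fall back to individual requests.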
D
So there's a handful of, like, configuration options defined in the core schema, but some of the draft extensions that have been written also, they all propose using /ServiceProviderConfig to advertise whether or not they are implemented on the service provider. And down at the bottom of the list, we have two that are, I think, a little less frequently implemented, but I've seen instances of both, I believe, Bulk especially. So there's /.search, which is essentially an alternative to the HTTP GET REST method.
D
It allows you to define a query, and you submit that query using POST in order to retrieve a set of results. And then the Bulk endpoint: you'll submit your request using a POST, and inside of the body of that request will be one or more separated sets of actions. So you'll include not only the resource that you're hitting, but also the REST method and the body of that request.
D
So it allows you to sort of scale and hit efficiency more easily, because you can submit, you know, 20 or 100 or higher amounts, a combination of GET, POST, PATCH, PUT, DELETE.
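The Bulk request shape just described can be sketched as follows. The resource ids and `bulkId` value are invented for illustration; the envelope (`Operations` list, per-operation `method`/`path`/`data`) follows the BulkRequest message defined in RFC 7644.

```python
# An illustrative /Bulk request body: one POST carrying several actions,
# each naming its own REST method, target path, and (where needed) payload.
bulk_request = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:BulkRequest"],
    "Operations": [
        {
            "method": "POST",
            "path": "/Users",
            "bulkId": "qwerty",  # lets later operations refer to this new user
            "data": {"userName": "alice@example.com"},
        },
        {
            "method": "DELETE",
            "path": "/Users/2819c223",
        },
    ],
}

methods = [op["method"] for op in bulk_request["Operations"]]
print(methods)  # ['POST', 'DELETE']
```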
D
And then on this page we have a not exhaustive... okay, so Phil had a comment: the POST on /.search is for security, because a GET query can reveal PII. Okay, so yeah, I did not have the context on that, so it was very much appreciated. And to expand on that, we could actually use what we have right here on this slide to go into the details of that.
D
So on this slide we have a set of examples, not exhaustive, obviously, of things that can be done with the SCIM protocol. So the third example, we'll just jump straight to that. In the resource URL, so https, you know, whatever, /Users?filter=, you can see there we're saying Users, filter equals: userName eq user@domain.com. And so the /.search option allows you to perform a search without having sort of a plain-text URL with PII in it.
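The two query styles being contrasted can be shown side by side. A sketch with an invented user address: a GET puts the filter, and any PII in it, into the URL itself, while POST /.search carries the same filter in the request body (the SearchRequest message from RFC 7644), keeping the URL clean.

```python
from urllib.parse import quote

filter_expr = 'userName eq "user@domain.com"'

# Style 1: filter in the URL of a GET; the address itself carries the email.
get_url = "/Users?filter=" + quote(filter_expr)

# Style 2: POST to /.search with the filter in the body; the URL stays generic.
search_body = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:SearchRequest"],
    "filter": filter_expr,
}

print("user%40domain.com" in get_url)  # True: the email leaks into the URL
print(search_body["filter"])
```

Since URLs routinely end up in access logs and proxies, moving the filter into the body is the privacy-motivated choice Phil's comment points at.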
D
So now, rewinding a little bit, we'll just talk to some of the other ones. So a POST against /Users will let you create a new resource, and this is, you know, I'm using Users as the example for all of these, but this scales out to any data resource representing objects that can be manipulated.
D
So if you send a POST with a body containing adequate information to /Users, /Groups, you know, slash anything, assuming it's been implemented in the relevant SCIM server, you should be able to create a resource. GET against the root resource, like /Users, will return all resources. Then, the third example: you can define attribute-based filters.
D
You know, you have a full set of operators, and, or, so you can only return users where department equals sales, or, you know, they match a specific userName. And then we also have another method of retrieving a user that doesn't use a filter, which is the fourth line, and that'll just be getting them using their id value, which is sort of a core component of SCIM. And that will return just the resource matching that id value, and the standard states that id needs to be unique across all resources.
D
The next two are two different methods to update objects. So if you do a PUT on a resource, you will define the full set of attribute values that it has, and if you omit anything, it is expected that the attributes that have been omitted will be set to null or, you know, blank. And then PATCH allows you to do a selective set of modifications without clobbering all the other things.
D
So if you send up a PATCH for a user and only modify active from true to false, you're leaving all the other information alone and not modifying it. Another option for updating: you can actually use filters to select a group of objects or resources and update all of them; you know, anything that comes back on the filter will get the same update applied.
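The PUT-versus-PATCH semantics just described can be modeled in memory. This is a simplified sketch, not a real SCIM server (real PATCH bodies use the PatchOp message with `add`/`remove`/`replace` operations); it only illustrates the contrast: PUT replaces the whole resource, so omitted attributes are cleared, while PATCH touches only what it names.

```python
def apply_put(resource, replacement):
    """PUT: the replacement IS the new state; anything omitted is gone."""
    preserved = {k: resource[k] for k in ("id",) if k in resource}  # server keeps id
    return {**preserved, **replacement}

def apply_patch(resource, changes):
    """PATCH: only the listed attributes change; the rest stay untouched."""
    return {**resource, **changes}

user = {"id": "42", "userName": "jsmith", "active": True, "title": "Engineer"}

patched = apply_patch(user, {"active": False})
replaced = apply_put(user, {"userName": "jsmith", "active": False})

print(patched["title"])       # Engineer: preserved by PATCH
print("title" in replaced)    # False: cleared by PUT, since it was omitted
```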
D
So now to talk about the schema a little more. The schema defines a minimal common set of attributes that represent the user and group data, along with the enterprise extension; we talked about this. It's also very extensible, so you can extend schemas: you can, you know, add attributes to existing resource types, you know, Users, Groups, but you can also extend resource types.
D
So if the concepts of Users and Groups aren't adequate for the usage situation, your service provider can implement others, and they can be advertised with that /ResourceTypes endpoint that we discussed a few slides ago. So it may be /Contacts, /ConferenceRooms, you know; the sky's the limit, essentially. And then the SCIM standard says that custom schemas may be permanently registered with IANA. From our research, we're not actually sure if anybody has done this yet, although there are plenty of custom schemas out there. So far, the only ones that we were able to notice there were the ones directly assigned to the standard. Next slide, please.
D
So the SCIM schema RFC, which is 7643, has two different spots where it describes attributes. The first section, which is either section three or four, sorry, I can't remember, is a more descriptive manner of talking about attributes, and that's what we see on the left. So it's a short set of, you know, one or two sentences per attribute, sometimes longer, talking about the properties that it has. And then down in, I believe it's section eight, there's a full JSON representation of the schema that's shown, and that gives you all of the nitty-gritty: the mutability state (is it readOnly, is it readWrite), is it multi-valued, what's the data type, all those things. And in the current standard, there's been a couple...
D
There's been a couple of instances where some confusion gets drawn between the two sections, the, like, sort of expository description part versus the JSON representation, and that's something we're hoping to touch on moving forward. But on this slide we can see: name is described as a single-valued complex attribute; it's the components of a user's name.
D
Not boolean. So name has several sub-attributes: formatted, familyName, givenName. And then, you know, phoneNumbers is similar, except it's a multi-valued complex attribute, and we can tell that because of the square brackets, and then it'll be one or more sets of curly brackets, each set of curly brackets enclosing a single, like, complex result or object of that value. I guess a complex value would be the right word. And yeah.
D
We can see phoneNumbers in this representation has value and type; I believe there's one or two other sub-attributes as well. And same thing for emails: it's multi-valued complex, so it's an array of one or more complex values. Typically, the type value is used to differentiate between different purposes for email, because if you have more than one email, it may be that, you know, one is a personal email or a secondary, versus, you know, a primary, like, work-related one.
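The attribute shapes being walked through look like this in JSON. The values are invented, but the structure follows RFC 7643: `name` is single-valued complex (one object), while `phoneNumbers` and `emails` are multi-valued complex (arrays of objects), with `type` distinguishing purposes and `primary` flagging the preferred value.

```python
user = {
    # Single-valued complex attribute: one object in curly brackets.
    "name": {"givenName": "Jane", "familyName": "Smith", "formatted": "Jane Smith"},
    # Multi-valued complex attributes: square brackets around curly brackets.
    "phoneNumbers": [{"value": "555-0100", "type": "work"}],
    "emails": [
        {"value": "jane@example.com", "type": "work", "primary": True},
        {"value": "jane@home.example", "type": "home"},
    ],
}

def primary_email(resource):
    """Pick the email flagged primary, falling back to the first entry."""
    emails = resource["emails"]
    for entry in emails:
        if entry.get("primary"):
            return entry["value"]
    return emails[0]["value"]

print(primary_email(user))  # jane@example.com
```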
D
Certain attributes in the schema, such as userName, are required; id is also required. And then, whenever results are returned from a server, there are also some attributes that are returned back that are read-only. So there's this whole complex meta attribute with some metadata about the object. And then, see, up at the top, the schemas attribute returned tells what schemas are being used for this object. So you can see in this example the core 2.0 User schema and the extension enterprise 2.0 User, which are, you know, the two different user schemas in the standard. Next slide, please.
D
And here's just a quick list of the data types. You've got, you know, your old favorites: strings, booleans, decimal, integer, dateTime, binary. And then the two that are, you know, worth talking about a little bit here are reference and complex. Complex, we just saw a couple of examples of; again, it's just a collection of one or more simple-valued attributes grouped together. And then reference is a pointer to a resource somewhere.
D
So, for example, the standard defines an attribute for photos, for, like, a profile picture type of concept, and a URL will be included, you know, https, whatever.jpg. And so the reference attribute is a reference to a URI pointing to a resource. And with that, next slide, please, and back to Janelle.
C
Sometimes the spec isn't 100% clear in certain areas, and this leads to some interoperability challenges. And then you may have a client-side SCIM client, that customer, that's interacting with the server, and they come into, well, having arguments over how they're implementing the spec, and then they realize that, actually, it's not clear. Sometimes in the spec, there is limited guidance on certain aspects, such as groups and roles, how you handle entitlements, and some of the attributes are not clearly defined in 7643.
C
So we seek to improve the usability of the spec in general. We are looking to make improvements. Some companies have noticed some problems with the bulk operations and how they operate. Pagination has some limitations.
C
Some people would like to extend the core schema, and then, you know, there are some new and emerging concepts that happen in draft, that are actually in use in draft, such as privileged access management. We seek to move that from draft to standardized and address whatever issues it has. So these are the types of paper cuts; we have a link to the paper cuts, and we welcome other organizations to share their experiences with SCIM as well, and share that with the working group.
C
There is some notion of that that had been in previous drafts regarding soft delete and how to handle that in SCIM as well. And then also looking to enhance the schema in certain areas: for data that's handled by HR systems, enterprise group data, as well as privileged access management, advanced automation scenarios. There are several of those that we could wax on about for quite some time. And then, enhanced data handling for larger sets; this could be the pagination, this could be other things.
C
Well, Nancy, it seems that we can move on to the next agenda person, and then there'll be more time and questions on something meatier than the intro. How...
A
We can proceed. Well, I mean, at some point the group will need to decide on the content of how we want to go about updating both the schema and the protocol specs, right?
B
Perfect. And you're willing to play slide advancer for me? Of course. Okay, that's fantastic. Thank you for all your work; we really appreciate it. Hi, everybody. My name is Pam Dingle. I work for Microsoft as well.
B
If you haven't heard of 7642, I wouldn't be terribly surprised. If you want to advance to the next spec... so there are actually three specifications that were published as a family in that 2015 time frame. 7643 and 7644 were the core schema and the core protocol.
B
This document, 7642, has the title of "Definitions, Overview, Concepts, and Requirements." So it's a non... you know, to use what I think is the right spec term, it's a non-normative document. It doesn't exactly instruct you how to use things, but the goal of it, as I understand it, is for someone to be able to read this and then have an easier time understanding what to do with the actual core specifications.
B
Yeah, yeah. So I guess that means that I can, you know, take his name in vain for the next 30 minutes, and there's really nothing...
B
Okay, don't put that in the scribing. All right, so what we want to do today is just look at what the fit for purpose has been for 7642, and at the end of it, my goal here is to ask whether people have interest in participating in an effort to revise this document.
B
So just keep that in mind as we speak: informational status. Sorry, Roman in the chat added the official correct terms. So it is informational status, versus a proposed standard, which is what 7643 and 7644 are. Okay, great. So let's dig, then, into: what does 7642 talk about?
B
So one of the major things that 7642 talks about is it tries to create the names of actors that have relationships to each other in the SCIM specification, or, you know, that could comprise use cases that use the SCIM specification. And so they describe three different actors: a cloud service provider, an enterprise cloud subscriber, and a cloud service user. Now, this, of course, is now slightly dated terminology.
B
In that, you know, a lot of the implications in the document are that enterprise cloud subscribers always have certain patterns of usage, and I think those patterns of usage have proven not to pass the test of time. But essentially, what you're seeing here, you know, this sort of diagram that's been provided, is this idea of a cloud service provider as essentially a multi-tenant platform, with many ECSs, or enterprise cloud subscribers, subscribing, and then, in turn, those enterprise cloud subscribers have users attached.
B
Now, the interesting thing about this is that, you know, there is no exact correlation here to a subscriber versus a provider in the actual 7643 and 7644 language. So the word "subscriber" is not a term in the protocol at all. As far as I know, it's certainly not in the definitions in RFC 7643, and so there's some confusion and ambiguity here.
B
The other thing is that the cloud service provider: the term "service provider" is in fact an important term in 7643, but as you'll see as we go on, there are times when cloud service providers talk to cloud service providers, and in that case, the term "service provider" has zero protocol-specific meaning. And so we have a collision in terms, just in the basic terminology that's used in 7642.
B
Let's keep going, if you want to advance this. So here's where the real confusion comes in. So, you know, we define in 7642 triggers, modes, and use cases. So the triggers are sort of the major obvious triggers that would happen at a RESTful service endpoint, right: creation, update, deletion. And then there's a trigger called single sign-on, which is an interesting one. It's really meant, I think, to talk about just-in-time use cases, but the interesting thing nowadays is that just-in-time use cases occur for reasons other than just SSO.
B
They occur for things like privilege elevation, for example, and so that term is slightly outdated now as well. It was perfectly fine in the day, at least in my opinion. And by the way, I'm making what might be provocative statements in the hopes that you care to correct me. So, you know, if you feel like I'm being overblown here, I would love to hear it, and I'm doing so kind of on purpose to see.
B
However, what the document doesn't do today is translate the use case of a cloud service provider trying to create an identity at another cloud service provider into any kind of protocol terms. So, you know, all we really get is this very coarse understanding that data is being pushed from one cloud service provider to another. And I think all of us here can agree that for someone to really understand how SCIM works, they have to understand more nuance than that, right? You know, the data flow...
B
The direction of data flow is important, but there are a ton of concepts that an actual implementer of the specification has to understand, and we'll go through some of those concepts a little bit farther along. But, you know, the big one here is: there is the confusion over "service provider," because in 7643, you know, service provider is generally defined as being an entity that runs a service endpoint.
B
The RESTful endpoint, right. And a client is defined as the entity that is accessing that endpoint. And so, you know, when you talk about pushes and pulls, there's an implication here that it is the client pushing to the service provider, or the client pulling from the service provider, but none of that is explicitly defined. It's left, sort of, for the reader to, you know, peruse both.
B
You know, you really have to read all three documents before you can come to any conclusions, which I believe makes this document slightly, yes, less useful. You know, the next question is, again, with "subscriber": is a subscriber a client? We don't know. We don't think so, you know, just based on this list.
B
If you look at it, you know, if an ECS is pushing to a CSP, then, great: yes, the ECS would likely be the client and the CSP would likely be the service provider. But what about the other options? You know, what if we want the opposite to happen, right? What if we want the ECS to be the service provider and the CSP to be the client? Those things are also possible, and they're not described here.
B
Just stopping: anyone want to comment on these use cases? Has anyone had experience in reading this doc?
B
I'm guessing people don't, because I don't think it's a, you know... I don't think it's a useful aid at this time. And so we're sort of in this situation where no one's read it, no one knows it exists, and hopefully people will feel the need to be able to come help me improve it.
B
If you want to go ahead... see, so here's the other question to contemplate, right: how has identity management changed in the 10 years since 7642 was started and where we are now, you know? And how many of those concepts are actually important for implementers as they look at the SCIM specification?
B
So, you know, you look at the SSO trigger as an example of how the world has changed, right.
B
Privileged access management is now, in 2021, an absolutely critical part of most enterprise security regimes, governance regimes. And that idea of a real-time ability to quickly create an account: it existed in 2011, but it existed primarily as a federation concept, where, you know, when you arrive at a relying party, the relying party would take the data from an SSO assertion and push it into a database, right, create the user on demand at that time. There's a whole bunch of on-demand capabilities and use cases that exist now.
B
You know, we have a ton of workflow around privilege elevation that's super important as well. So, you know, cross-domain, you know, not just access but elevation: those two concepts are one example of this. Another interesting thing to note, and this is more for 7643 and 7644, is that webhooks didn't exist.
B
As I understand it, they did not exist before SCIM was created. So, you know, do we have to think about those kinds of, you know, industry changes that have occurred around us? Proof of possession is another example. You know, we have some basic requirements right now in 7643 and 7644 around OAuth, but we don't have, like, strong security recommendations for SCIM, because those didn't exist. We didn't have things like DPoP at the time that this document was being created. And so the question then becomes...
B
You know, we have a lot of issues right now with folks taking what I would call the "hello world" path to implementing specifications, where they implement it until it succeeds, right. They implement it until the user gains access, but what they don't do is check whether users that shouldn't have access, don't. And, you know, this is a big problem industry-wide, in my opinion, and it's something that we should be, what's the word, we should be better able to advise on now, especially as folks who are writing...
B
The other one that I want to bring up here is maybe the simplest one, which is: in 2011, it was almost a given that enterprise on-premises would be the heart of any kind of provisioning regime. I don't know if anyone wants to challenge me on that, but having been there myself, that's my, you know, that's my memory, right: people loved the cloud in limited doses and really never thought that an entire cloud-native regime was anywhere near coming true. But we are here.
B
We are 10 years down the line, and so the idea of a pure cloud-native enterprise is starting to be true. There are cloud-native enterprises today, and so that idea of, you know, every cloud element is now trying to negotiate their service-provider-ness versus their client-ness, and that stuff is all just changing.
C
Yeah, Pam, just to add on to some of your observations on the last 10 years and how identity has changed: you know, I think when we were looking at SCIM back then, it was really between, you know, maybe, you know, within an organization to another organization, but not this kind of explosion of the propagation of the cloud and all the service providers that are out there, and all the funnels of all the data.
C
That's floating around. And so I think that there was kind of a narrow view, just really based on the existing implementations of how people were implementing locally within their organizations, say, provisioning services or identity services for their enterprise, but then that expanded reach, now, globally.
B
Perfect sense. And, you know, probably the last big change is that governance has become a security imperative, in that, you know, I would say in 2011, governance was primarily an accounting measure. You know, you would count all your users, you would make sure you had the right users, you might sign a quarterly affidavit. And what we're seeing now is this huge push for real-time understanding of who has access to what system.
B
And, you know, with an expectation that if attackers are in the system, there is a detection that can occur and a remediation that can occur. So instead of, every quarter, looking at your user list for a financial resource and saying, hey, John Smith is dead, we should really remove him, right: we're now in a situation where attackers are, you know, gaining access, creating a user with administrative privileges, executing five commands, deleting that user. And those kinds of activities now make provisioning far, far more important, right.
B
So what 7642 doesn't do today, which I think it should, is talk about how clients today implement bi-directionality. Because the truth is, you know, we're not purely pushing and we're not purely pulling in almost every circumstance. Generally speaking, there's a flow, a data flow, right, where, you know, the creation of accounts occurs in one direction, and it may occur from SCIM client to SCIM provider, or it may occur from, you know...
B
The flow may run from service provider to SCIM client, but it's because the SCIM client is pulling data. So understanding this diagram, to me, is a big part of what we can do with 7642. If you think about how a push or a pull works: when the data flow is moving from SCIM client to service provider, generally speaking, what it means is the SCIM client is making those create and delete calls.
B
You
know
patch,
post
or
patch,
pull
you
know,
put
types
of
commands,
but
they're
also
using
get
commands
constantly
right
to
make
sure
that
they
have
the
right
idea
of
what
the
service
provider
knows
to
be
able
to
check
whether
an
incremental
push
is
even
needed
right.
So
it
isn't
just
a
case
of
of
the
data
flow.
It's
a
case
of
the
skim
client
having
to
be
sophisticated
enough
to
know
what
it
can
search
for
when
to
make
for
you
know
to
make
their
queries
efficient.
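That check-before-push pattern can be sketched in a few lines; this is a minimal illustration, assuming a hypothetical base URL and watching only two attributes (the filter syntax follows RFC 7644, everything else is illustrative):

```python
# Sketch of a SCIM client deciding whether an incremental push is needed.
# The base URL and the set of watched attributes are hypothetical.
from urllib.parse import urlencode

def query_url(base, user_name):
    # Targeted GET: ask the service provider only for the one user and the
    # attributes the client cares about, keeping the query efficient.
    params = {
        "filter": f'userName eq "{user_name}"',
        "attributes": "userName,active",
    }
    return f"{base}/Users?{urlencode(params)}"

def needs_push(local_user, remote_user, watched=("userName", "active")):
    # Compare the client's view with the provider's view; identical views
    # mean no PATCH or PUT needs to be sent at all.
    return any(local_user.get(a) != remote_user.get(a) for a in watched)

url = query_url("https://example.com/scim/v2", "bjensen")
local = {"userName": "bjensen", "active": True}
remote = {"userName": "bjensen", "active": False}
```

Here `needs_push(local, remote)` is true because `active` drifted, so the client would follow up with an incremental push; when the views match, the GET was the only traffic needed.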
B
So
you
know
giving
a
better
more
nuanced
view
to
folks
who
want
to
be
a
skim
client
and
who
want
to
be
able
to
update
service
providers
would
be
very,
very
useful
if
you
look
at
the
second
piece
down
here
in
the
same
way,
if
you
are
a
skim
client
who
has
to
pull
data
from
a
service
provider,
there
may
absolutely
be
pushes
that
occur,
and
that
is
you
know
that
comes
to
this
idea
of
multiple
starts
of
authority,
which
I
think
is
another
place
where,
in
the
last
10
years,
we've
gotten
a
lot
more
sophisticated.
B
So
it's
absolutely
possible,
for
example,
in
an
hr
system
that
the
hr
system
is
the
service
provider,
that
you
know
a
cloud
platform,
might
be
pulling
data
from
the
service
provider
and
might
even
be
responsible
for
the
existence
of
the
user,
but
is
going
to
push
back
and
you
know,
is
going
to
play
the
authority
for
an
email
address,
for
example,
and
so
you
know
so
it
just
isn't
as
simplistic
now
as
it
might
have
used
to
be,
and
so
we
absolutely
have
to
understand
how
the
user
is
moving
right,
how
the
accounts
are
being
being
created
through
the
system.
B
So in this case, this could be Workday on the left-hand side, it could be Salesforce on the right-hand side, and it could be Google in the middle, as an example. Understanding where you are in this hierarchy, related to your source of authority, becomes important, because it changes things: just because you're a client attaching to the service provider does not mean that's enough data for you to understand what you should do next. And this is also where the pagination and synchronization discussions come in.
A
You have 15 more minutes, okay, all right. And thanks to Dan, Janelle, and Danny for that. But yeah, you have 15 minutes left, okay, cool.
B
In
that
case,
so
so,
in
this
case
I've
defined
this
term
of
provisioning
hub.
I
don't
know
that
this
is
the
right
term.
It's
not
a
term.
That's
used
in
the
specifications
today,
but
you
know
if,
if
it's
the
case
that
we
want
these
kinds
of
relationships
to
be
communicated,
then
I
think
terms
like
provisioning
hub
might
be
more
valuable
right.
That
show
that
data
is,
you
know,
sort
of
ingressing
and
egressing,
so
to
speak,
compared
to
talking
about
a
cloud
service
provider,
because
essentially
everyone's
a
cloud
service
provider
today.
B
So
it's
not
super
valuable
to
call
them
that
you
know
I'm
open.
Obviously
you
know
we
can
propose
any
set
of
terms
we
want,
but
I
do
believe
that
some
terms
to
help
understand
the
difference
here
are
important,
and
so
you
know
if
you're
a
for
example,
if
you're
an
implementer
who
is
you
know
who
has
a
a
sas
application
and
wants
to
be
able
to
pull
data
from.
Let's
say
you
know
three
major
cloud
service
platforms
right
pick
any
three
they
may
actually
have
to
you
know
they.
B
They
may
not
be
able
to
play
the
client
in
all
three
situations
and
so
having
them
understand
that
and
understand
that
they're,
essentially,
that
you
know
the
final
stop
on
a
train
is
super
valuable.
If
you
want
to
go
to
the
next
slide
right,
this
is
so.
This
is
one
common
pattern
right
where
the
service
provider
is
the
provisioning
hub.
The
second
common
pattern
is
the
case
where
the
skim
client
is
a
provisioning
hub,
and
in
case
you
think
that
that
there
are
no
examples
of
this
in
the
wild,
microsoft.
B
So
in
this
case
you
know
the
the
you
know,
if
you're,
that
same
person
that
same
implementer
trying
to
set
up
your
sas
app
to
receive
data
you're
going
to
have
to
implement
an
endpoint
versus
implementing
a
client
right
and
and
ultimately
the
decision
you
make,
there
depends
a
ton
on
what
your
downstream
partner
already
has
set
up
and
that's,
I
think,
a
useful
thing
to
also
communicate
here
and
again.
It
has
to
be
bi-directional
right.
B
The
the
the
pattern
that
that
was
brought
up
in
this
case
of
you
know
if
you're
on
the
right-hand
side
here
is
the
as
a
service
provider
on
the
right
is
that
you
may
be
absolutely
oh.
I
just
realized
my
errors
are
backwards,
but
anyways
you
may
be
pulling
data
from
that
or
the
skim
data.
B
Oh,
I
got
a
comment
in
the
chat.
Since
skim
is
an
http
protocol,
the
terms
all
originate
from
http.
Yes,
that's
that's
fair
enough,
but
I
do
think
that
an
overlay
in
this
case
is
is
useful.
I
don't
I
don't
know.
If
there
is
a
there
isn't
really
a
concept
of
chaining.
B
So,
let's
look
at
yes,
perfect,
thanks
nancy!
So
again
these
things
compose
right,
so
the
chains
start
to
compose
as
well,
and
we
start
to
get
maps
of
how
account
creations
right
and
account
updates
ripple
through
a
company
right
or
through
a
system,
or
you
know,
through
a
network,
and
so
what
you
can
see
here
is
I've.
You
know
I've
got
notation
to
denote
where
the
start
of
authority
is
so
in
this
case
the
top
left.
B
Skim
client
is
sort
of
the
origination
of
the
account
and,
as
as
you
know,
the
skim
client
there
pushes
to
the
service
provider.
What
happens
is
you
may
have
a
provisioning
hub
that,
in
fact
can
handle
both
service
provider
and
skim
client
operations
right?
So
in
this
case,
you've
got
a
full
fan
out
effect
right.
B
So
you
you
know,
data
gets
pushed
from
the
original
skim
client
into
the
service
provider
and
then
what
hap
what's
happening
is
some
clients
are
pulling
that
data
in
order
for
them
to
be
updated,
but
the
cloud
platform
may
also
be
pushing
that
data
to
other
service
providers
right
so
now,
you've
got
a
full-on
ecosystem
of
pushes
and
pulls
and
ripples
of
data.
B
The
one
thing
to
note
here
is:
if
the
on
the
you
know
the
service
provider
on
the
bottom
left,
although
it's
drawn
with
a
data
flow
that
implies
that
it's
that
it's
a
downstream
application.
It's
not
right.
If
you
look
at
the
pattern,
the
pattern
for
that
service
provider
is
identical
to
the
pattern
for
the
service
provider
on
the
right.
B
So
essentially,
you
know
the
patterns
start
to
collapse
at
some
point
right,
depending
on
the
on
the
the
direction
and
the
directions
matter,
far
more
about
where
this
big
dotted
line
is
than
about
any
kind
of
overall
direction.
Right
things
start
to
to
blend,
and
if
you
go
to
the
next
one
so
that
you
know
so,
that's
one
case
the
case
where
the
skin
client
is
kicking
off
the
avalanche.
You
know.
The
second
use
case
is
where
the
service
provider
is
kicking
off
the
avalanche
right,
so
a
database
gets
updated.
B
The
data
passively
sits
at
the
service
provider
waiting
and
the
cloud
platform,
as
the
client
then
actively
goes
and
pulls
that
data
from
the
service
provider
and
kicks
off
the
whole
ripple
effect
again
right,
whether
that
whether
there
are
entities
passively
fetching
or
entities
that
are
actively
being
pushed
to
you.
So
to
me
this
is
this,
this
understanding
of
how
all
of
it
works
right
as
this
much
more
complex,
set
of
pulls
and
pushes
are
far
more
important
than
just
understanding.
There's
such
a
thing
as
an
overall
push
or
an
overall
pull
all
right.
B
You
want
to
hit
the
next
one
and
we're
getting
to
the
end
here.
So
then
the
question
becomes:
what
can
we
do
right?
What
should
we
do?
We
don't
have
to
write
an
entire
opus
on
how
skim
works
right.
So
I
think
what
we
want
is
a
minimal
amount
of
data
that
is
going
to
make
it
easier
to
read
the
specification
right
and
so
for
me,
there's
some
basics
here.
First
of
all,
we
have
to
align
the
taxonomy.
B
We
have
to
talk
about
service
providers
and
clients
in
ways
that
are
helpful
to
understand
how
they
work
in
the
specification
this
as
it
exists
today,
7642
never
never
talks
about
service
providers
or
clients,
never
talks
about
resources
or
resource
types.
It
never
talks
about
extensibility
it.
It
never
uses
the
term
provisioning
domain
which
is
defined
in
7643.
B
You
know
it.
None
of
that
is
even
in
the
document,
and
I
think
that's
something
we
can
address,
and
then
you
know
the
the
from
a
use
case
perspective.
Now
we
have
much
more
nuanced
use
cases
that
we
can
discuss
right.
We
can
discuss.
For
example,
you
know
the
7642
doesn't
talk
about
groups
at
all.
B
It
literally
only
talks
about
users
right,
so
just
that
you
know
you
think
of
the
the
difficulty
that
exists
in
understanding
what
to
do
with
large
groups,
right
million
user
or
million
member
groups.
Talking
about
those
use
cases,
I
think,
can
be
very
very
helpful.
Have
people
understand
what
they
need
right?
The
slash
me
endpoint
is
never
discussed.
B
Search
is
never
discussed,
so
you
know
some
number
of
things
we
don't
have
to
exhaustively
drain
it,
but
finding
the
set
of
use
cases
that
actually
give
people
the
whole
picture
of
the
specification
is
valuable.
You
know:
custom
setting
up
custom
resources,
for
example,
would
be
a
very
good
use
case
in
my
opinion,
and
then
you
know,
and
then
there
are
some
of
these
more
advanced
concepts
which
we
can
decide
whether
to
include
or
not
this.
The
idea
of
incremental
attribute
exchange
never
discussed,
and
that's
super
useful.
B
The
you
know
that,
like
put
and
post
to
read
the
spec
and
understand
the
difference
between
a
put
and
a
patch
is,
is
pretty
hard
work,
but
we
could
very
easily
make
it
better
with
a
use
case
that
describes
it
and
then,
of
course
synchronization
is
the
other
one.
That's
a
heavily
discussed
part
of
this
new
skin
3.0
world.
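The PUT-versus-PATCH distinction being described can be shown in a few lines; this is a sketch of the RFC 7644 semantics (the user record itself is made up): PUT replaces the resource, so the client supplies the complete state, while PATCH carries only the operations for what changed.

```python
# PUT: the client must send the full resource it wants the provider to store.
def put_body(full_resource, changes):
    body = dict(full_resource)
    body.update(changes)
    return body

# PATCH: the client sends a PatchOp message listing only the deltas.
def patch_body(changes):
    return {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [
            {"op": "replace", "path": path, "value": value}
            for path, value in changes.items()
        ],
    }

user = {"userName": "bjensen", "displayName": "Barbara", "active": True}
full = put_body(user, {"active": False})      # whole resource, one field changed
delta = patch_body({"active": False})         # just the one replace operation
```

For a resource with many attributes, the PATCH body stays the same size while the PUT body grows with the resource, which is exactly the trade-off a use case could spell out.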
B
Enough? No, no, no, that's fine. Sorry, Phil. If you do want to come on and make a comment, we of course would love it. But at the end of the day, here's what I think we need.
B
We
need
something,
that's
easy
to
read
and
that
is
going
to
be
time
well
spent
for
anyone
who
takes
the
time
to
look
at
it
and
then
something
that
in
fact,
is
going
to
change
how
people
read
the
specification
in
a
positive
way
right
and
then
the
last
thing
is
just
to
make
sure
that
there
is
nuance
that
that
can
be
gleaned
out
of
this
document
that
helps
them
understand
complexities
as
they
look
at
the
spec,
and
I
think
that's
it.
I
will
I
will
leave
it
there.
B
Hopefully
this
is
useful
for
folks,
but
I
I
would
like
to
ask
for
volunteers,
so
you
know
I
am
committed
to
trying
to
put
a
draft
together
as
soon
as
I
can
and
submit
it
to
this
working
group
for
consideration.
B
Is
you
know
if
there's
anyone
who's
interested
in
being
part
of
this?
You
can
respond
on
the
email
list.
I'm
assuming
you
could
contact
nancy
or
barry.
You
can
speak
up
now.
A
Well,
so
this
is
a
little
disconcerting,
so
phil,
if
you
have
no
audio
pam,
you've
got
four
minutes
left.
So
if
anybody
has
comments
or
feedback
or
additions
or
a
different
perspective
to
what
pam
presented,
we
can
discuss
it
in
the
next
four
minutes.
A
Pam,
you
have
a
volunteer
in
janelle.
Yes,.
F
We could try getting me on the phone with Phil and seeing if it will play through my microphone.
A
So
pam,
I
think
this
is
a
really
good
start
at
the
modernization.
It
looks
like
you're
introducing
some
new
terms
as
well.
That
could
help.
B
Yeah,
I'm
hoping
I'm
hoping
to
for
guidance.
I
mean,
I
think
what
we
can
do
now
is
revive
seventies.
You
know
revise
this
use
cases
and
definitions
document
to
point
to
the
current
skim
document,
so
that
they're
a
pair
and
then
in
theory,
there's
a
possible
revision
that
has
to
happen
at
the
point
where
we,
where
76,
43
and
44
are
subtly
changed
right.
If
we,
if
we
decide
to
subtly
change
them
right,
so
I
you
know,
I
feel
like
this
work.
B
...stands as well; that makes sense. I think what the group can do in general is also keep us from boiling the ocean; there's definitely a line that has to be drawn as to what's useful for people to know and what's too much information. So I think that'll be... yeah.
A
Okay, I presume I should go ahead and share your slides now. So thank you, Pam. And just to close on the use cases: you can also contact Pam directly, as she's volunteering to put together the proposal for updating the use cases draft. Feel free to contact me as well, but as she is taking on the pen, feel free to contact her directly to get more involved.
E
Apparently latest Safari doesn't work, and Firefox isn't even talking with my OS, so who knows; Chrome's working, okay. Yeah, so this draft came out essentially after SCIM was published, and the problem that came up was that, when you have very large groups, sometimes you just want a sample of the data, and the idea was: hey, Phil, how can we page multi-valued attributes? And I was a little confused, and I noticed people on the email list are confused.
E
We
want
we
want
to
page
the
values
or
say
I
want
a
range
of
values
by
index
rather
than
paging
resources,
which
is
the
normal.
What
the
skim
protocol
does
it
lets
you
page
through
a
set
of
resources
that
might
be
returned
from
a
query
in
this
case
the
intent
is.
I
want
to
select
a
set
of
rows
from
a
multi-value
attribute
to
return,
so
this
spec
provides
two
types
of
ways
to
select
those
values
either
by
filter
or
by
index
essentially,
and
to
help
avoid
some
confusion.
E
I've
changed
the
name
to
multi-value
filtering,
because
this
spec
kept
getting
rolled
up
into
the
stateful
page
discussion,
which
really
is
a
separate
thing.
So
I
think
we
can
go
to
the
next
slide.
So
this
this
this
draft
was
submitted
after
the
group
finished
its
charter
and
that's
why
it
sat
around
for
a
few
years
really
hasn't
had
any
discussion.
E
I
believe
it's
called
a
value
path
and
really
what
it
is
is
normally,
you
might
say,
emails
equals
phil,
dot,
hunt
yahoo.com,
but
really
what
you're
trying
to
say
is
if,
if,
if
you
want
a
way
to
say
if
the
emails.type
equals
work
and
emails.value
ends
with
yahoo.com,
I
want
to
return
the
value
and
we
introduced
in
the
skim
protocol
spec
the
value
path
notation,
which
is
to
put
square
brackets
after
an
attribute,
so
that
you
could
execute
the
filter
within
the
context
of
that
multi-value
complex
multi-valued
attribute.
E
So
I
can
say,
email
square
bracket,
type,
equals
work
and
value
ends
with
yahoo.com
would
select
a
particular
row
from
that
attribute
and
the
thinking
here
was
why
don't
we
use
that
notation
on
the
attributes
list
so
that
I
can
tell
skim
that
when
it
returns
that
attribute,
I
just
want
a
particular
row
of
data
or
a
particular
sub-attribute
of
data
from
that
table.
Sorry
table
it's
from
that.
Json
object
that
we
have.
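The row selection that a value path like emails[type eq "work" and value ew "yahoo.com"] expresses can be illustrated with a hand-rolled evaluator; this is not the real SCIM filter grammar, just a sketch of what the provider would hand back for that one expression:

```python
# Stand-in evaluator for one specific value-path shape against a complex
# multi-valued attribute: type eq <t> and value ew <suffix>.
def select_rows(rows, type_eq, value_ew):
    return [
        row for row in rows
        if row.get("type") == type_eq
        and str(row.get("value", "")).endswith(value_ew)
    ]

emails = [
    {"type": "work", "value": "phil.hunt@yahoo.com"},
    {"type": "home", "value": "phil@example.org"},
]
selected = select_rows(emails, "work", "yahoo.com")
```

Only the first row survives, which is exactly the "particular row of data" that using the notation on the attributes parameter would have the service provider return.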
E
The service provider can add a meta attribute, meta.attribute.count, to indicate what the number of rows is for that attribute. You can decide; my thinking was it would be returned only when you invoke this type of request, but that we can discuss. I don't know that you need to add it for every single request that comes in. We'll see the example request coming up, because this is going to be an enhancement to SCIM, and it's its own RFC.
E
And you can see that emails is returned, and there's only one value returned, and it's the one that corresponds to type equals work, as requested. And the emails count: this is probably a bad example; I should have said two or more, because the example assumes that there are two or three email addresses, and the count would reflect the number of actual email addresses available.
E
So if I wanted to do multi-value paging, and, as I mentioned, the term might not be the best because we're confusing it with resource results paging: in this case we're doing members where type equals group, with count equals five and startIndex equals one.
E
So
if
I
had
a
type
of
group,
in
other
words,
a
group
within
a
group,
I
want
to
know
the
first
five
members
where
the
type
equals
group
and
we
return
that
value.
So
I
don't
have
an
example
for
this,
but
that
would
be
how
you
would
specify
a
range
in
this
case,
and
it
just
follows
the
same
pattern:
we've
used
for
resource
paging.
We
can
now
use
in
the
attributes,
parameter.
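Applying the familiar startIndex/count pattern inside a multi-valued attribute might behave like this sketch; the exact parameter spelling inside the value path is my guess at the draft's intent, and startIndex is 1-based as in SCIM resource paging:

```python
# Page the rows of a multi-valued attribute: the intent of
# members[type eq "group"] with startIndex=1 and count=5.
def page_values(rows, type_eq, start_index=1, count=5):
    matched = [row for row in rows if row.get("type") == type_eq]
    first = start_index - 1   # startIndex is 1-based, like resource paging
    return matched[first:first + count]

members = [{"type": "group", "value": f"g{i}"} for i in range(8)]
members.append({"type": "user", "value": "u1"})
first_page = page_values(members, "group", start_index=1, count=5)
```

A follow-up request with startIndex=6 would return the remaining three group-typed rows, mirroring how a client walks pages of resources today.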
E
So
the
spec
is
fairly
straightforward.
Some
things
to
clean
up
is
whether
you
want
to
have
that
count
of
values
that
are
actually
available.
E
I do, however, think that, since you're further restricting information within the SCIM protocol spec, you're actually narrowing the exposure of information, so I do think that these will be relatively straightforward, and they'll follow the same security and privacy considerations that RFC 7644 has; we're not introducing any new exposures here.
E
Oh
yes,
yes,
I
think
that
was
yeah.
That's
where
I
was
going
was
depending
on
where
the
working
group
is
going
with.
If
you,
if
we're
doing
a
skim,
v2
bis
or
something
like
that,
then
there's
no
need
for
this
to
be
a
separate
rfc.
I
would
support
rolling
it
in
with
an
enhanced
skin
draft.
E
If
we're
moving
7644
on
to
its
next
phase
of
formalization,
then
we
couldn't
really
add
this
new
feature.
Indirectly,
it
would
have
to
remain
an
extension
draft.
E
So
that's
probably
the
biggest
discussion
is
what's
the
process
for
moving
this
forward.
Do
we
roll
it
into
the
core
spec?
Somehow,
because
we're
updating
the
course
back
or
do
we
leave
it
as
a
separate
enhancement.
E
Yeah
yeah
yeah
and
that's
another
reason
why
I
didn't
want
to
do
all
the
final,
the
final
security
consideration
stuff
because
it
really
depends
on
where
we
roll
it
next
so,
and
I'm
flexible
as
the
author,
I
I
don't
know
if
it's
useful
to
just
say
for
the
process
wise
for
the
group
to
adopt
it,
knowing
that
its
final
disposition
is
not
known,
but
I
I'm
wondering
if
it
helps
engage.
E
So I can say that my colleague at Oracle, Greg Wilson, who I was hoping could be here today, did say that Oracle intends to implement it shortly.
D
Interested, all right. I mean, I haven't actually read the draft, so there's a strong difference between the two, yeah. So I've written two different drafts. The drafts themselves are sort of rough; this is my first go-around writing.
D
So the first draft is on a concept that I've tried to title verified domains. In the cloud there are a lot of different services, particularly email providers, anything with a hint of security to it: they like to require that ownership of a domain name, so that @contoso.com, is verified before you can add it to your SaaS tenant, and that's obviously to prevent impersonation.
D
That
sort
of
thing
like
me
as
an
individual
without
proper
proof,
I
shouldn't
be
able
to
go
and
register
you
know
with
with
google
and
set
up
mail
for
them.
So
obviously
you
know
there's
other
things.
Dns
stop
a
lot
of
these
things,
but,
like
you
know,
mail
routing,
but
so
the
the
problem
that
comes
into
play
here
is
when
a
skim
client
is
in.
You
know
the
cloud
world
is
trying
to
provision
a
large
set
of
users
to
into
a
into
the
the
skim
service.
D
So
let's
say
it's
an
email
provider
or
something
of
that
sort
or
any
collaboration
platform.
The
username
in
many
cases,
ends
up
following
that
userat
domain.com
format
and
what
is
currently
undetectable
based
on
the
skim
standard,
is
hey.
What
format
of
name
does
the
skim
service
require?
Because
some
just
do
a
simple.
You
know
it
might
be
danny
and
some
might
be.
D
You
know
danny
domain.com
and
so
being
able
to
determine
that
is
helpful,
but
then
also
knowing
for
any
instances
where
that
you
know
user
domain.com
format
is
being
used.
Is
the
the
right-hand
side
of
that
the
domain.com
part?
Is
there
some
sort
of
verification
mechanism
in
place?
Because
if
there
is
then
what
happens
is
if
the
client
has
you
know
data
coming
into
it
that
doesn't
align
with
what's
allowed
on
the
on
this
given
service
provider,
the
client
will
send
requests
that
fail.
D
If
I
try
to,
you
know,
create
a
user
using
a
domain
suffix
that
I
have
not
proven
ownership
of.
In
that
collaboration
platform
it
will
fail
and
that's
a
it's,
an
unnecessary
use
of
resources
for
both
the
client
and
the
service
provider,
and
it's
something
that's
easily.
You
know
detectable
and
avoidable,
or
it
should
be
so.
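The failure-avoidance being described could look like this on the client side; this is a hedged sketch, where the attribute names (domainName, allowSubdomains) are my reading of the proposal rather than settled draft text:

```python
# Pre-validate a userName's domain suffix against the verified-domain list a
# client would read from the proposed endpoint, instead of sending a create
# that is guaranteed to fail.
def domain_allowed(user_name, verified_domains):
    if "@" not in user_name:
        return False  # this sketch assumes the provider requires user@domain form
    domain = user_name.rsplit("@", 1)[1].lower()
    for entry in verified_domains:
        name = entry["domainName"].lower()
        if domain == name:
            return True
        # subdomains only count when the entry explicitly allows them
        if entry.get("allowSubdomains") and domain.endswith("." + name):
            return True
    return False

verified = [{"domainName": "contoso.com", "allowSubdomains": False}]
```

With that list, danny@contoso.com passes, while danny@evil.com and the subdomain form danny@eu.contoso.com are rejected locally, before any request ever goes out.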
D
The
the
the
entire
idea
of
this
draft
is
to
add
a
slash,
verified
domains
endpoint
so
that
so
that
any
sort
of
skim,
client,
interacting
with
the
service
writer,
can
read
the
list
of
allowed
domains
as
well
as
some
service
writer
config
stuff,
to
see
whether
or
not
you
know
this
whole
draft
has
been
implemented
and
that
will
allow
the
client
to
act
intelligently
and
avoid
sending
requests
that
will
obviously
fail
next
slide.
Please.
D
So
yeah
the
two
key
components:
we
have
the
verified
domains
resource.
I
just
mentioned
that
so
I've
written
this
with
it
only
being
read
only
so
only
http
get
there's
some
interest
out
there
with
parties
that
I've
talked
to
about
certain
sas
platforms,
having
an
option
to
trust
domain
verification
from
major
idps
that
are
connected
to
them.
I
left
that
completely
to
the
side.
I
think
that's
probably
a
separate
problem
to
be
solved
or,
aiming
you
know,
an
extension
to
an
extension
or
just
a
completely.
D
You
know
separate
draft
because
there's
a
lot
of
security
considerations
there.
How
do
you
stop
a
bad
actor
from
trying
to
impersonate
the
you
know?
A
trusted
skim
client
from
the
you
know
trusted
from
the
perspective
of
the
the
service
provider,
and
I
don't
have
the
security
background
to
write
that
so
and
then
we
have
the
service
writer,
config
extension
just
to
advertise.
Is
it
available
and
a
couple
of
key
you
know
things
of.
Is
this
specific
thing
you
know
enabled
or
not
next
slide,
please.
D
So
a
quick
run
through
the
schema
of
the
verified
domain
object,
so
we
have
a
domain
name
so
fairly
straightforward.
It's
a
string
containing
at
least
the
second
level
domain
and
top
level
domain.
So
that
would
be
you
know.
Domain.Com
of
the
verified
domain.
D
Subdomains
are
supported
as
well,
so
that
would
be,
you
know,
x,
dot,
domain.com
and
you
know
recursively.
I
think
it's
the
right
word
down
as
well.
So
if
you
don't
want,
you
know
a.b.domain.com.
That
would
also
be
allowed.
D
There's
then
a
boolean
that
says
allow
subdomains,
which
is
to
really
just
advertise
actually
about
that
piece
that
we
just
that
I
was
just
talking
about
the
of
sub
domains.
There
are
instances
where
there's
a
single
organization
may
have
many
instances
of
a
like
a
cloud
platform
in
play.
It
may
be
split
by
you
know,
region
by
organization.
D
Inside
of
the
company
there's,
you
know
a
whole
bunch
of
different
ways
that
you
can
split
up
your
user
base
right
and
so
ownership
of
the
top
level
domain.
Let's
say
domain.com
doesn't
necessarily
mean
that
all
possible
subdomains,
you
know
infinitely
under
that,
should
be
allowed
as
well.
At
least
you
know,
at
least
it
should
be
an
option
to
say
you
know,
owning
having
domain.com
verified
on
your
tenant
does
not
allow
you
to
also
have
you
know
a.domain.com.
D
It
only
gives
you
that
top
down.
You
know
that
top
level
or
whatever
is
explicitly
returned,
rather
than
anything
under
it
as
well.
That
way,
if
you
have
you
know
north
america,
in
in
one
sas
environment,
europe
and
another
apac
and
another,
you
they're
all
separated
and
ownership
of
the
top
level
domain.
Doesn't
you
know
create
problems
if
it
doesn't
actually
grant
the
full
set
of
subdomains
under
it?
D
And
then
the
the
last
attribute
in
the
schema
at
least
outside
of
some
core
attributes
like
id,
would
be
the
verified
date.
This
one's
optional?
It's
you
know
an
informational
piece
to
know
how
long
or
you
know
when
was
the
domain
verified
in
this
service,
so
yeah
for
use
cases
where
it's,
where
it's
valuable
next
slide.
Please.
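Pulling the attributes just described together, a single resource from the proposed endpoint might look like the sketch below; the camel-case spellings and the id convention are my assumptions, following normal SCIM core-attribute style rather than quoted draft text:

```python
# Hypothetical shape of one verified-domain resource, plus the cascading
# rule that allowSubdomains controls.
verified_domain = {
    "id": "9f2c7a1e",
    "domainName": "domain.com",              # at least second-level + top-level
    "allowSubdomains": True,                  # does the grant cascade downward?
    "verifiedDate": "2021-06-01T00:00:00Z",   # optional, informational only
}

def grants(domain, resource):
    # Exact match always counts; anything underneath counts only when
    # allowSubdomains is true on this particular resource.
    if domain == resource["domainName"]:
        return True
    return bool(resource["allowSubdomains"]) and domain.endswith(
        "." + resource["domainName"]
    )
```

Flipping allowSubdomains to false turns this entry into an exact-match-only grant, which is the per-resource behavior discussed later in the Q&A.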
D
There's then a complex attribute of username properties, and it has two sub-attributes, and I'm not sure if RFC 5321 is the right one to call out, so if anybody wants to correct me, go for it. One is to say: does the username follow that user@domain.com format? And the second one is: if it accepts that format, rather, if it requires it, is the domain suffix required to be verified? So this is to allow a SCIM client to sort of programmatically, or automatically, figure out that it needs to act intelligently here, and that not every possible value for username is acceptable. And then the last attribute in ServiceProviderConfig is emails verified domain required, and that's essentially the same as that...
D
Second
sub
bullet
point
on
username
properties.
This
game
standard
says
that
you
know
like
76,
43
and
44
already
calls
out
that
the
emails
dot
value
attribute
needs
to
be
in
that
rsc
5321
format.
So
it
didn't
make
sense
to
to
make
it
complex
and
have
advertised
because
it
should
always
be
true,
but
the
other
those
two
are
essentially
to
say
both
for
username
and
for
emails.
D
You
know
two
separate
attributes:
do
these
require
that
the
domain
suffix
on
the
username
or
email
value
do
they
need
to
be
verified
with
the
service
writer
next
slide?
Please.
D
So
here
are
some
open
questions.
One
I
just
mentioned
is
5321
the
correct
rc
2.2.
For
that
format,
I
you
know
took
a
stab
at
it.
I'm
sure
there'll
be
more
revisions
of
this
draft
before
we
try
to
you
know
if
we
ever
try
to
move
it
towards
adoption.
D
If
I'm
wrong,
please
let
me
know
for
the
last
subdomains
thing
is
the
purpose
of
it
clear:
should
that
be
moved
to
optional
and
then
guidance
provided
specifying
that
the
value,
if
not
provided,
is
assumed
to
be
true.
So
if
you
don't
say
that
sub-domains
are
not
allowed
just
it's
assumed
that
they
are.
D
Anybody who reviews this: if you have feedback on how I can make any of the wording clearer, please, I would love to talk with you. Yeah, and that's about it for this draft. If you could move to the next slide, please.
D
And
if
anybody
has
questions
at
any
time,
just
join
the
queue
pam
just
joined
the
queue.
D
Yes, but I think that's achievable with the current draft. So the boolean would be... and maybe the wording in the draft needs to be refined, but the idea, at least, was that it's to say: are subdomains of the values returned allowed?
D
So
if
you
know
domain.com
is
what's
returned
and
you
know
all
sub
domains
are
not
allowed,
then
you
should
only
you
know,
you
would
say
no
allow
subdomains
false,
and
this
is
on
a
per
resource
return.
So
every
domain
return
would
have
its
own
value
of
for
allow
sub-domains,
but
the
the
service
writers
are
good.
E
D
Writer
could,
alternatively,
just
return
the
three
subdomains,
so
if
it
was,
you
know,
a.domain.com,
b,
dot,
domain
c
dot
domain,
it
could
return
the
three
sub
domains
and
explicitly
just
these
are
the
subdomains
that
are
allowed.
B
I
so
essentially,
what
that
does
is
allows
or
prevents
cascading
subdomain
assumptions
right.
Okay,
I
get
it.
D
Yeah,
so
if,
if
it's
everything
you
can
just
do
like
domain.com
and
say,
allow
subdomains
and
they
get
everything
underneath
if
it's
specific
subdomains
then
rather
than
listing
the
topleveldomain.com,
you
can
just
list
or
return
back
the
subdomain.top.
You
know
uh.whatever.com
in
a
list
of
whatever
the
the
allowed
ones
are.
D
Okay
and
so
yeah,
the
the
second
draft
sort
of
aims
to
solve
that
same
general,
like
sassy
problem
of
clients,
interacting
with
skim
service
providers
want
to
know
like
they
want
to
be
able
to
act
intelligently
and
not
waste
time,
calculating
requests
that
are
going
to
fail,
and
so
the
discoverability
of
information
relevant
to
whether
or
not
a
request
fails
is
important.
D
In
my
opinion,
and
so
in
this
case
I'm
I
wrote
a
a
draft
jointly
covering
adding
two
new
resources
for
roles
and
entitlements,
and
these
are
meant
to
mirror
acceptable
values
for
the
user
resource
object
for
the
the
roles
and
entitlements
attributes
in
the
the
core
schema.
D
So
discovery
of
the
acceptable
values
allows
us
to
not
send
as
as
the
client
it
allows
the
client
to
not
send
an
invalid
request
and
by
you
know,
pulling
that
list
back.
You
can
know.
Okay,
these
are
the
accepted
values.
Anything
else
won't
work
and
then
the
ability
to
discover
roles
and
entitlements
and
what
values
are
allowed,
because
they're,
this
draft
is
going
to
go
into
returning
them
as
complex
objects.
So
you
know
both
value
and
display.
D
It
also,
then,
gives
the
clients
a
little
more
flexibility
in
helping
the
like,
whatever
whoever's
configuring
it
to
assign
the
the
roles
or
entitlements
inside
of
the
client
so
that
they
can
get
the
correct
values
on
the
correct
objects
in
the
service
provider.
D
So
the
the
end
state
of
that
being
a
client
could
go
and
read.
You
know
they
could
do
a
get
on
just
the
base
of
slash
roles
or
flash
entitlements,
get
a
full
list
of
the
available
values
back
and
then
represent
them
in
a
ui
somewhere
and
allow
them
to
be
assigned
to
other
resources
as
appropriate.
Just
so,
there
could
be
like
a
direct
mapping
of
you
know
of
you
know
these
objects
get
this
role
or
what
not
next
slide.
Please.
D
Yeah
so
the
key
components,
slash
roles,
slash
entitlements
and
a
servicewriter
config
to
announce
what
is
available
next
slide.
Please.
D
The resources proposed here would have value, display, and type, which all originate from the sub-attributes on the user object, as well as one new attribute specific to this draft, which is enabled, which allows the service provider to announce that this role or entitlement exists but is not currently in a state that can be used. Next slide, please.
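A sketch of what one discoverable role resource and its client-side use might look like; value, display, and type mirror the RFC 7643 sub-attributes of the user's roles attribute, and enabled is the draft's one new attribute (the exact spellings here are my reading of the slides, not settled draft text):

```python
# Hypothetical list a client might get back from GET /Roles.
roles = [
    {"value": "admin",  "display": "Administrator", "type": None, "enabled": True},
    {"value": "legacy", "display": "Legacy Role",   "type": None, "enabled": False},
]

def assignable(role_list):
    # A client UI should only offer roles the provider says are usable now;
    # enabled=False means the role exists but cannot currently be assigned.
    return [role["value"] for role in role_list if role["enabled"]]

usable = assignable(roles)
```

Swapping the word roles for entitlements gives the parallel /Entitlements picture, which is the mirroring the draft describes.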
D
So
here
is
an
example
of
service
writer.
This
is
mirrored
for
entitlements,
so
you
can
just
sort
of
you
know,
replace
all
for
the
word
roles
with
entitlements,
and
I
I
think
it
applies.
So
it's
a
complex
value
for
you
know,
for
roles
inside
the
service
config
enabled
is
this.
Is
this
feature
added?
So
we
have
an
enabled
for
roles
and
enabled
for
entitlements
are
multiple
roles
supported
some
service
providers
only
allow
a
single
value
for
roles,
whereas
some
allow
multiple.
D
So
since
the
standard
today
doesn't
have
an
easy
way
to
determine
that
that
I'm
aware
of,
why
not
include
it
here
and
then
additional
just
advertisement
of
what
does
the
service
writer
support?
D
The
the
7643
says
that
these
sub
attributes
on
roles
are
optional,
so
primary
and
type
we
we're
allowing
the
service
provider
to
advertise
whether
or
not
they
support
them,
and
all
this
is
just
sort
of
adding
discoverability
around
these
for
the
client
to
know
how
best
to
interact
with
the
service
writer
that
way,
if
they
don't
support
type
or
they
don't
support
primary
there's,
no
sense
in
calculating
them
and
including
them
in
the
request.
D
So
open
questions
on
these.
How
widely
adopted
is
the
type
sub
attribute
for
roles
and
entitlements?
I
in
my
own
experience,
which
is
only
one
person,
I've,
seen
implementation
of
primary
of
value
and
of
display.
D
I
I'm
not
sure
if
I've
seen
implementation
of
type
used
for
roles
or
entitlements,
so
I'm
very
curious
just
on
what
implementations
are
out
there,
and
this
raises
questions
should
be
considered
like
if
we
were
to
include
discoverability
of
available
types,
because
the
7643
does
not
list
any
canonical
values
for
for
type
for
roles
or
entitlements
should
type
be
advertised.
D
Should
the
values
that
are
allowed
for
type
be
advertised
at
sort
of
a
a
global
level,
so
perhaps
in
servicewriter
config,
or
should
the
available
types
be
advertised
on
a
per
role
or
entitlement
value
level
and
then
adding
just
sort
of
extra
questions
here,
even
though
they're
not
all
relevant
to
this
draft
specifically,
has
there
been
any
implementation
of
type
for
roles
or
entitlements?
Should
it
be?
Should
it
remain
in
the
core
theme
adopt?
D
If-
and
this
goes
back
to
the
question
that
that
phil
had
raised
earlier-
if
we're
just
sort
of
you
know
giving
a
once-over
on
76.40
to
43-44
and
sort
of
trying
to
push
it
forward
to
like
a
a
real
standard
versus
proposed,
then
we
probably
can't
make
that
change.
But
if
we're
going
towards
a
new
rfc
with
some
major
changes,
you
know
that
sort
of
thing
it.
I
think
it's
a
worthwhile
question
of.
D
Has
there
been
any
implementation
of
this
and,
as
you
know
like,
is
there
a
value
in
it
being
there
and
then
there's
a
question
that
I
I
now
have
floating
around
based
on
a
previous
conversation
I
had
with
phil:
should
this
be
a
standalone
extension
or
get
merged
in
the
course
fema
docs?
This
again
falls
to?
Are
we
just
you
know,
tidying
up
the
existing
ones
or
going
hard
on
you
know,
making.
D
I
I
think,
from
sort
of
an
ease
of
like
access
thing
and
the
timelines
of
how
soon
can
we
improve
this?
The
standalone
draft
makes
sense,
in
my
opinion,
and
potentially
it
could
always
be
obsoleted
later
and
moved
into.
A
Right, I'm glad you put the question at the end, Danny, because that's what I was going to ask. It seems, much like what we did with Phil's draft, what we can do is ask on the mail list for interest in the actual topics of the two drafts before we sort out whether they become standalone drafts or become part of the core schema.
D
Right, okay. Well, without any questions, I believe it's back to you, Nancy.
A
All right, so thank you, everyone, for putting forward draft proposed concepts. With that in mind, we talked about putting proposals together that can serve as documents we can adopt to move forward.
A
So there was a question on the list about how we could move forward with some of the tools. We talked about it at the virtual interim informal session a couple of weeks ago, and I brought up the notion of using GitHub. A lot of the working groups at the IETF use that as a way of openly sharing the documents that the working group is focused on.
A
It also allows us to have version control, but, more importantly for myself and the authors and editors of the drafts, we can interactively and live do the issue tracking, suggestions, and pull requests, if you will. The IETF has provided guidance; I've put the link in here, RFC 8874, and there's a tutorial there as well. So as chairs, what we can do is provide a SCIM working group repository there.
A
Potentially, I could create a sub-repo for proposed drafts, but I wanted to bring that up to you all as we work through consensus. So my proposal is going to be that we use GitHub.
A
So
that
said,
independent
of
the
interactive
communication
channel,
that
being
slack,
we
officially
will
run
we
meaning
the
chairs,
we'll
officially
run
the
consensus,
calls
if
you
will
through
the
skim
mail
list.
So
I
still
encourage
everyone
to
use
that
mail
list
for
any
issues.
Questions
topics
that
we
want
to
raise
and
also
pay
attention
there,
as
whatever
decisions
we
may
make.
Even
in
these
plenary
sessions,
they
will
get
confirmed
on
the
mail
list.
A
So
I
will
give
a
minute
or
two
to
see
if
there
are
any
comments
or
feedback.
First,
on
the
github.
D
If not GitHub, would it just be that people submit drafts to the IETF, or is it all just decentralized?
A
And as we provide comments and create issues that we want to address in the drafts, we can track those through GitHub. The IETF does have an issue tracker, but I think most everyone today uses other mechanisms. In the other working groups that I chair, we use GitHub, and that's where we track the issues, a combination of GitHub as well as the mail list.
D
Yeah, it was already clear that the final steps would be submitting, but that answered the question. I was talking more about before they're fully baked: you know, collaborative review and, yes, a more centralized way to collect all these drafts before they're fully baked.
F
I just said that when a working group uses GitHub, it also has to figure out what the pace for posting Internet-Drafts to the Datatracker is. So you use GitHub for the development of the document, but still periodically post drafts throughout the process, when you get to a point where you think it's solid enough that a wider community might want to be looking at it.
A
Okay, so coming back: we've already put the question for the repo, so I'll go ahead and create the GitHub whenever I can get my laptop back up and running. The second question is this: prior to us becoming a working group, thank you, Pam, for running the bi-weekly informal meetings, and I would still encourage that.
A
So
while
I
create
the
poll,
if
anybody
has
comments
or
feedback.
A
Okay,
I've
done
the
poll
before,
but
I
thought
I
could
do
multiple
choice.
A
I
can't
change
the
question
so
the
first
question.
B
Is
yeah
I'm
trying
to
describe
the
question
too?
So
I'm
sorry,
I
think
I
would
love
to
see
us
do
virtuals,
just
because
a
it
helps
us
get
everybody's
point
of
view
into
the
dock
and
I
think
it
also
just
raises
engagement.
A
B
That's
fine
plus
one
for
me
and
I'm
and
I'm
really
glad
that
you
can
run
them.
I
think
it's
better
for
for
the
official
chair
to
run
them
for
sure
I
don't
know
about
that
pam.
But
thank
you
also,
I'm
going
to
add
my
vote
to
the
last
poll
because
I
had
to
dismiss
it
to
get
on
the
to
unmute
my
mic,
so
I
couldn't
respond.
No.
A
Okay,
that's
right.
I
mean
we,
we've
got.
You
know
a
good
amount
of
interest
there
so
rather
than
me
running
a
poll
I'll
just
have
people
speak
up
if
there
are
any
objections
for
us
to
start
running
a
monthly
cadence,
and
I
can
do
a
doodle
poll
to
see
times.
A
Okay,
this
actually
went
faster
than
I
thought
it
would.
Are
there
any
orders,
other
orders
of
business
that
we
need
to
discuss?
We
actually
have
two
minutes
left.
A
Okay,
so
I
will
raise
one
janelle
and
danny
talked
about
the
paper
cuts
and
the
things
that
we
want
to
address
as
we
evolve.
The
use
we've
already
covered,
the
use
cases,
the
schema
and
the
protocol,
and
danny
and
phil
you
provided
some
of
the
work
that
you'd
like
to
see
get
addressed.
A
Thank you for that. Anything else that we need to cover?
A
Yes, and I'll probably start a Doodle poll to find a recurring day and time in which we can do the virtuals. Oh, and thank you, Paul, as well.
G
I just wanted to jump in here to congratulate everyone on a successful launch. I think we had a smooth chartering process. I appreciate everyone's feedback, and a big thank you to Barry and Nancy, whose leadership got us here to our first working group meeting. It sounds like we have a good plan to kick off the work.