From YouTube: IETF113-SCIM-20220323-1330
Description: SCIM meeting session at IETF 113
2022/03/23 13:30
https://datatracker.ietf.org/meeting/113/proceedings/
B: Excellent, thank you, Roman. Are you okay? So, welcome to the IETF. For some of you it may be your first physical meeting, those who are on site. I think, Tim, right? No, I can't see from here. Patrick. So welcome. This is the SCIM, otherwise known as the System for Cross-domain (or Cloud) Identity Management, working group. With me now I have a co-chair.
B: Welcome him, Aaron Parecki. So, rules of engagement. I don't know why I'm trying to flip slides.
B: Who is here for the first time, for those on site? I'm presuming those that are remote (I'm looking at the remote side) may have already joined. So, in the IETF we follow some procedures that are noted in the Note Well. In the interest of time, I will not go through each of them; I encourage you to read them. Next slide, please.
B: We also have some online meeting tips. We used to have blue sheets, but since this is hybrid, attendance is mainly taken off the Meetecho attendance. So this is where I encourage you, if you want your attendance recorded, to get into Meetecho. And there is a chat room too, that you can get to through the little chat bubble in Meetecho, or you can use Jabber. Next slide, please.
B: And we also have a code of conduct, which is basically: treat everyone with respect. You're here to be an active participant, and to welcome participation in this group. All right, one more note: since this is our first hybrid meeting, there are rules that the IETF is trying to follow. For all of those that are here on site, please make sure you're wearing your masks, because it is mandatory.
B: Okay, okay. With that, we can go ahead and get started with the agenda bash.
B: I believe we have a couple of volunteers already for the minute takers. So thank you, Ned. I forget who the second one is. Pam, but you're kind of partial, since you're covering the first one. And so I put the link here where the minutes are being taken, which is a HackMD/HedgeDoc link.
B: Okay, so on to the agenda. On our charter, the first thing out of the gate is for us to update the use cases, the protocol (the base protocol and schema), and we have assigned editors for them. So I've asked them to provide updates and progress on those. We also actually have Eliot, who is focusing on IoT; he will be speaking to the use cases and applicability of SCIM for IoT. And then Phil has posted a draft on how SCIM is a profile that can be shared using the Shared Signals and Events.
B: So if he needs my help, I will step down and Aaron can take over chairing, to help him present a little bit of background about Shared Signals and Events, otherwise fondly known as SSE; but he will be presenting the SCIM profiles for that. So that is the agenda. Unless anybody has any changes or comments: it is agenda bash time, going once, going twice. All right. So with that I will have Pam come up, and, of course, let me bring it up, unless you want to share.
D: We'll confirm whether this is, in general, the right thing, and whether there are missing pieces. And people, please feel free to add to this. Better if you don't delete, but if you do think something should be deleted, type it in the text and that way we'll cover it. So, just a little bit of background.
D: For any of you who are not aware, and many people are not, the RFC series for SCIM is actually three documents, not two. There's a specification called RFC 7642, which is meant to be an overview: use cases and concepts. However, in the working group process in that last effort, that document really served a different purpose, at least as I understood it. I wasn't part of that, so I don't want to speak for that group.
D: It helped to form what went into the protocol, into 7643 and 7644, but it didn't get updated at the end, and so the document doesn't cover all of the concepts that we believe an implementer might need to know in order to come in, understand the specification, and be able to implement to it. So the goal here, at least as I understand it, is to describe the existing RFCs in an updated Internet-Draft.
D
You
know
or
an
updated
draft
that
is
closer
to
the
actual
concepts
that
you
know
that
are
modern
today,
so
that
we
could
have
implementers
more
easily
understand
the
specification.
So
let
me
just
check
before
I
go
further.
Does
that
make
sense
to
everyone?
Is
there
anyone
who
wants
to
comment
on
the
goal.
E: Sorry, quick question. So, one of the things that I get asked a lot is OpenAPI (Swagger) API specs for SCIM. Is that on the table? Have people talked about that? I haven't been following this closely, but...
E: OpenAPI specs for SCIM, is that on the table? I mean, this is one of the things that gets brought up a lot today among implementers, who sort of look at SCIM and go: it doesn't look like a modern API; it doesn't have the OpenAPI specs; I can't code to that, because I have to write all the interfaces myself. It is what it is. So my question is whether that's part of the charter explicitly, or we...
F: Hi. Just, good afternoon, good morning everyone, and good evening to those further east. Just a slightly dissenting point of view on OpenAPI and Swagger: I'm not sure that it actually is rich enough and stands up to some of these cases that we'll discuss later. So, Leif, I would be very interested in your comments when I come around to talking about IoT. Okay.
D: Okay, so what you see here is a first attempt to make the outline for the use cases and concepts; or, sorry, the headings, if you will, for the document. What they really represent are the concepts that we actually think implementers would need to understand in order to understand how to use SCIM. Right now, I have roughly created some rationales around these.
D: As far as definitions go, I believe we probably don't want to redefine things that are in the normative specs, but we might want to define some of the more business-like use cases. So, for example, I have user management, group management, and provisioning listed here. In the case where people may not understand, for example, what group management is, it should be just enough to help people understand how that term is being used.
D: The next section is the concepts section, and I believe these are basic concepts around provisioning, around some of the industry ideas. So, what does it mean to have a source of authority? Being able to say that a given entity is authoritative for an attribute, or authoritative for an account, is sort of the idea behind source of authority. Then data directionality: the idea of, is it master/slave?
D: Are you just pushing things straight out, or is it bi-directional sharing? Because in the case of bi-directional sharing, it obviously matters, on an attribute-by-attribute basis, which party in the transaction is authoritative for what. So that one's very loaded and may deserve a section of its own. Then: just-in-time provisioning.
D: Identifiers and primary keys: in this case, the understanding of, for example, why in SCIM we have an externalId; we can put some explanation in there. Then API security, which is typoed; so, under API security, my thought was just explaining how a SCIM endpoint can be secured. Again, just enough information. For some of this, if you feel like it's too obvious, please just add a comment in the document, because there's just a question of where you draw the line for what's too...
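As a concrete illustration of the identifiers point above: in SCIM, the service provider assigns and owns the immutable `id`, while the provisioning client can carry its own key in `externalId`, so each side keeps its own key space. A minimal sketch (the attribute names are from RFC 7643; the values are invented for illustration):

```python
# Minimal SCIM User resource showing the two identifiers discussed above.
# "id" is assigned by the service provider and is immutable; "externalId"
# carries the provisioning client's own primary key, letting the two sides
# correlate the same user without sharing a key space.
user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "id": "2819c223-7f76-453a-919d-413861904646",  # server-assigned
    "externalId": "hr-00042",                      # client's key (hypothetical)
    "userName": "janelle@example.com",
}

def correlate(resource: dict) -> tuple:
    """Return the (server key, client key) pair used to match records."""
    return resource["id"], resource.get("externalId")
```

A client doing a sync would look records up by `externalId` on its side and by `id` on the provider's side.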
F: Hi again. I think these are good things. I don't know that they're an exhaustive list, and I don't know that they need to be at this point. I think the question you might want to be asking is: does anybody object to anything in this list? I would be shocked if people did, but you might want to add to the list as we develop things out a bit more.
F: Let me throw one example out: you talk about data directionality. Putting the schema aside, let's talk about connection directionality, which goes to the protocol. Is a RESTful protocol the only way that we want to do this, or are we thinking in terms of things like gRPC or WebSockets, where we might want to reverse connectivity in some cases? And I'll give an example of that later.
D: Great, let's keep going. How am I for time?
D: Okay, so the next one is use cases. I think it was not the last interim meeting but the one before where we had a conversation about this: what does a use case mean? The rationale that I've gone with here is: what are the chunks of SCIM functionality that people might want to achieve? So it's not a business use case; it is an implementation use case. And so, what you'll see here...
D: The use cases include negotiating schema; search and query; object synchronization and lifecycle; and then massive data set maintenance.
D: I mean, those four things are massive and can be broken down into subsections, but I don't have a sense of whether they're in any way enough; I don't know that they're all sufficient.
D
I
do
you
know,
I
do
believe
the
massive
you
know
that
that
management
at
scale
is
a
theme
that
we
want
to
spend
time
on
right,
because
that
includes
pagination.
It
includes
how
do
you
bootstrap
a
connection
in
the
first
place?
Eating
you
know
includes.
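As an aside on the pagination point just mentioned: SCIM list requests page with a 1-based `startIndex` and a `count` parameter, and responses report `totalResults`, which is what makes an initial bulk download loop possible. A sketch of the client-side loop, against a stand-in fetch function rather than a real server:

```python
def fetch_page(start_index: int, count: int) -> dict:
    """Stand-in for GET /Users?startIndex=...&count=... (no real server here:
    25 fake users so the paging loop below has something to walk)."""
    all_users = [{"userName": f"user{i}"} for i in range(1, 26)]
    page = all_users[start_index - 1 : start_index - 1 + count]
    return {
        "totalResults": len(all_users),
        "startIndex": start_index,
        "itemsPerPage": len(page),
        "Resources": page,
    }

def fetch_all(count: int = 10) -> list:
    """Walk every page; note SCIM's startIndex is 1-based, not 0-based."""
    users, start = [], 1
    while True:
        page = fetch_page(start, count)
        users.extend(page["Resources"])
        start += page["itemsPerPage"]
        if start > page["totalResults"] or page["itemsPerPage"] == 0:
            return users
```

The `itemsPerPage == 0` guard matters in practice: a server may return fewer items than requested, and trusting `count` instead of `itemsPerPage` can loop forever.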
D: How do you do bulk operations, all of that. But we may want to break that up; if there's a smarter way to break it into pieces, then we might want to do that. And then one is sort of a lifecycle, really the idea of managing a given object in SCIM, which is: how do you create it, how do you read it, how do you update it, how do you delete it? And then the second one is...
H: Hi. I think you've raised some really excellent points here. I think one of the other things is that there's some data that we've come across that may need protection, especially when it's a URL that's being transferred, like a URL to a photo. This has come up on occasion; on a separate note, Danny has brought it up separately. So there might be some question of how we deal with sensitive data in the data sets, or how you gain access to it, as another item.
I: So, if I do a sneak preview of the next section already: I actually find the use cases this way a little technical. For instance, if I see user self-service, I actually wonder. SCIM in the general sense, for me, reads more like an enterprise thing, which is also reflected in the use cases in the technical part here. But self-service, on the other hand, doesn't seem to be reflected in the use cases so far, or would probably fit in in some way. Okay.
D: Yeah, that's a great point; they're definitely pretty technical. Would you be willing to add, to put an example in, that you think would be...
D: That would be great. The more points of view, the better, for sure. Fantastic, okay. Well then, yeah, let's go through the use cases. The comment, I think, is that it's okay if one of these two sections is geeky, but they shouldn't both be. In some sense, my hope for the scenarios was to essentially tie some of the geeky things together into end-to-end scenarios that would resonate with an implementer.
D: So I think this is where anyone who is implementing: if you're not seeing a thing that resonates with you, then it's clearly the wrong list. So we'll just quickly go through those example scenarios. The first one, I think, is our canonical example; however, that just might reflect my bias. It says: a SaaS app writes a multi-tenant integration to get data from a cloud platform. And I've...
D: I've set it up to be any cloud platform, so that I could highlight how schema negotiation, or how dynamic schema checking, could work. I again don't know if that's realistic.
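For reference, the dynamic schema checking mentioned here leans on the discovery endpoints SCIM already defines (`/ServiceProviderConfig`, `/Schemas`, `/ResourceTypes`). A sketch of what a client might check before writing an attribute; the response body below is a hand-written stand-in, not real server output:

```python
# A client consulting /Schemas to decide whether the provider supports an
# attribute before provisioning it. "schemas_response" is a truncated,
# hand-written stand-in for the real discovery response.
schemas_response = {
    "Resources": [
        {
            "id": "urn:ietf:params:scim:schemas:core:2.0:User",
            "attributes": [
                {"name": "userName", "type": "string", "required": True},
                {"name": "nickName", "type": "string", "required": False},
            ],
        }
    ]
}

def provider_supports(schema_urn: str, attr: str, discovery: dict) -> bool:
    """True if the named schema advertises the attribute."""
    for schema in discovery["Resources"]:
        if schema["id"] == schema_urn:
            return any(a["name"] == attr for a in schema["attributes"])
    return False
```

A multi-tenant integration could run this check once per tenant at connection time and silently skip attributes the tenant's platform does not advertise.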
D: I know it's not a commonly used feature, but in some sense, if we can talk through some of the less commonly used things, then in theory it would help people know how to use them. So in that case we would write something nice about negotiating schema, understanding how to do an initial download, an initial synchronization of account and group data, and then maybe, after some amount of time, performing a full-on reconciliation to be sure those things match.
D: The implication I have when I say reconciliation is that maybe the service halted for a minute and you lost access, so you weren't sure if every SCIM command was received, for example. And I believe that's a concern that we've heard talked about in the various meetings.
D: How do I know if I've missed something? The difference in my mind between reconciliation and an incremental update is that one is a brute-force download, and the other is a mechanism for knowing what the delta might be between what you have and what you wish you had. Okay, so the next one is progressive profiling, and the idea behind that would be customer...
D: That's the progressive profiling use case. And then, yeah, user self-service would be the idea that you go manage your profile on a web page or web portal, and that data then gets pushed through the chain. And then the last three. The first one is a bulk address change, and my thought here is that it would be at a large enough scale that we could discuss pagination.
D: You would want to search with a filter, to only find users who have a building that matches the old building that we're moving out of, and then you would change the data in the address by running a PATCH, a bulk update operation, incrementally, to change only those accounts where the building has changed.
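Sketching that scenario in terms of the protocol pieces it exercises: a filtered search to find the affected users, then a PATCH per user (or a `/Bulk` envelope wrapping them). The filter grammar and PatchOp shape follow RFC 7644; the building names and paths are invented for illustration:

```python
import urllib.parse

old_building, new_building = "Building 7", "Building 12"

# Step 1: a filtered search for users still listed in the old building.
# A value filter on the complex "addresses" attribute narrows to one entry.
query = urllib.parse.urlencode({
    "filter": f'addresses[type eq "work" and streetAddress eq "{old_building}"]'
})
search_url = f"/Users?{query}"

# Step 2: a PATCH body that replaces only the matching address component,
# rather than rewriting the whole resource.
def address_patch(new_value: str) -> dict:
    return {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [{
            "op": "replace",
            "path": 'addresses[type eq "work"].streetAddress',
            "value": new_value,
        }],
    }

patch_body = address_patch(new_building)
```

At scale, the search would be paged and the PATCH bodies batched, which is exactly where the pagination and bulk-operation discussions above meet.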
D: And then the last two. We have no group management at all right now in the use cases and concepts. So getting enough information in there that people can understand the trade-offs of managing massive groups is a big deal: this idea of, how do you replace a user, or a member, in a group that has a million users? How do you do that without literally doing a GET of every single user and then having to change only one attribute? And then the last one is the idea of extending schema.
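The million-member group problem is what PATCH on a Group resource is meant to avoid: rather than GETting and rewriting the whole `members` list, a client can remove one member by filter and add the replacement in a single request. A sketch of that payload per RFC 7644 (the member IDs are hypothetical):

```python
def replace_member(old_id: str, new_id: str) -> dict:
    """PATCH body for a Group: swap one member without transferring the
    other 999,999. The remove op targets the member by a value filter."""
    return {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [
            {"op": "remove", "path": f'members[value eq "{old_id}"]'},
            {"op": "add", "path": "members", "value": [{"value": new_id}]},
        ],
    }
```

Whether a given server applies this efficiently on a very large group is an implementation question, which is exactly the trade-off the use-cases document could spell out.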
B: Oh, Antoine, you're in the queue. Hello.
J: How you manage the change of ownership on devices is something that I think could be interesting, because obviously, in business use cases, you have to change ownership for a set of objects from one person to another. And one thing that is also important, from my perspective, is to manage the fact that a company fails and its objects are in the field for 20 years. So how do you change the authority that manages the keys and the trust relationship for a given...
B: Oh, I expected you to stay in the queue, as I thought you might answer. Eliot, you're back in the queue.
F: Yeah, I typed something into the chat. I was going to say, Pam, that I wonder if we could defer the poll on this part of the discussion until after the IoT presentation, because you'll probably want to pick something up. I'm not going to try and tell you how to format every last thing here; I just think there's going to be food for thought.
B: As Eliot says, we can run a poll to give you guidance if you still need it. Okay. I put Eliot next, because he also has a use case to address: the IoT.
F: Do you mind sharing them? That way, I think, it's probably just as easy.
F: All right. So, I did up these use cases with Janelle, whom I will now use as an example. Next slide, please.
F: Each of these identities requires credentials. It so happens that Janelle has a lot more capability to prove herself: she has fingers, eyes, and the ability to type, and the light bulb doesn't. There's no user interface on the light bulb; there's no means for the light bulb to provide a lot of context visually, other than blinking or something. Next slide.
F: So this is a big problem in terms of just getting things onboarded to a network. We want things to seamlessly be able to use Wi-Fi.
F: We want things to be able to seamlessly use different connection technologies. And for those people who've been paying attention to the side meetings that we've been having, and the meetings we've had in the IOTOPS group: the nice thing about onboarding standards is that there are so many of them.
F: So the key question is, right, you know: here we have a provisioning thing. We know we need to provide some information about the device into a local environment.
F: We might need to delete all devices; there may be a need to update existing devices; and there might be a need to group devices. I say "may be" and "might" (you're hearing a little bit of hedging) because I think we're just getting going in the context of SCIM.
F: The question is: why not use SCIM? And so we wanted to start with at least the concept and see where it goes. Next slide, please. So, this is a fairly texty slide, I apologize for that, but the key thing here is that we want to be able to do as much automation of provisioning as possible. The basic notion is: you buy something.
F: It should even be possible to automate the transfer of credentials relating to a device. And what's important to understand is that devices need proof that they're connecting to the right place, as much as the network, or an application, needs proof that a device is allowed to talk to it.
F: The industry really wants a standard approach to that, and, as I mentioned, there are many different standards; I've listed a couple here. This has a lot of implications in terms of schema design. The company I work for, Cisco, used to have a slogan, I don't know if it's still out there: "no technology religion." Well, we don't want to have to make choices about technology at this level.
F: We want to have an expansive view at this moment in time. Maybe later on in life there will be consolidation, but that's certainly not where we are, not only in terms of L2 technology, but even on top of L2, in terms of the mechanisms that are used to provision things.
F: So, as you see below: to provision onto Wi-Fi, you might use DPP, or you might find a way to use RFC 8366 vouchers, or you might find a way to do other means of provisioning. For BLE, you might use the OOB mechanisms that are defined, or you might do some weird thing with pairing keys, which has been done out there. And you might use FIDO FDO vouchers for any of this.
F: So what this means is that we need a certain amount of normalization, and we might even need multiple levels of normalization, and this is something that I think SCIM presents a challenge for at the moment.
F: So, just to put this in a more graphic view: you have a device; it might have Wi-Fi capabilities, it might have 802.3 capabilities, and you can just see this mesh of different onboarding standards that might be used in these different cases. So yeah, here again, it's "no technology religion," and this is sort of a use case that we expect to happen, because there is not likely to be a consolidation anytime soon. We just want to be able to support whatever it is...
F: ...the device happens to support. And the key thing here is: if we go down that right side, what does it mean in terms of schema? For DPP (this is the Device Provisioning Protocol, by the Wi-Fi Alliance), it's a public key that needs to be obtained and transmitted into the deployment, so that the device that has the corresponding private key can do a cryptographic exchange to establish trust. For RFC 8366...
E: Hey, this might be a clarifying question or not, or maybe something you're getting to later, and if so, please stop me. But you mentioned that all of the various types of key material and device types would require some sort of update to SCIM, and to the way we do security for SCIM. Right now, I think it's all specified in terms of bearer tokens. And what I'm wondering is...
E: Even if you have all of this complexity on the devices, you have some sort of key. Couldn't you use a token exchange, a token-translator approach, to get yourself a JWT and use that to talk to the SCIM server? So how is this a problem for SCIM, the fact that you have all of these? Yeah.
F: Okay, so the issue here is: in the IoT use case, we're not attempting to modify the IoT devices themselves. The use case in this case is to take what the IoT device offers and allow the local deployment to have access to the necessary information in order to onboard that device.
E: Sure, understood. But the IoT device has some key, something, right? And what I'm trying to get at is exactly how this affects SCIM. Because all you need to feed the SCIM server, in order to make it happy, is a bearer token, a JWT or something, and you can get that through any kind of mechanism if you have access to some sort of key material on the client. So why does the fact that you have a wide range of client types and key material...
L: That trailed right into my comment. I think the one thing I would say, as someone who worked more in the non-headless device space: this is not unique to IoT, and I think it should be positioned as devices. If you look at a directory, we store billions of device relationships side by side with user objects that are not IoT devices. So it's really devices, just the general information.
F: Okay, I think you've caught me out using the letters I, O, and T together; I only mean devices. So the other issue here is: we see the need for directionality to be flipped, in terms of who's talking to whom. The use case that SCIM was initially designed for was: you have an enterprise that's using some cloud-based service, and you want to mass-provision the cloud-based service based on the user base that's in the enterprise.
F: So there's a little bit of a directionality issue. It's not "add this user to your service"; it's "add this device to your inventory" that is really what's being asked here. And it's meant to be used in conjunction with the other mechanisms, then, to onboard the device on, say, the network, or in applications.
F: There was an old commercial in the 1980s that said, "This is not your father's Buick." Well, this is not your parent's SCIM; this is a little different. First of all, bootstrapping credential movement is required in these cases. In the cases that we're discussing, it's not something that will happen sometimes; it's something that's likely to happen every time. The supplier may be the SCIM client, as opposed to, you know, a service that the enterprise calls out to.
F: The connection direction is also something that we do need to discuss in a lot more detail. And the device identities here need to be clearly scoped, especially if you have multiple suppliers: you don't want one supplier dinking with another supplier's stuff. And this is directly analogous to...
F: If you're a service like Salesforce, you don't want one enterprise's provisioning operations impacting another enterprise's provisioning operations. The device attributes are going to vary based on the capabilities of the device, and this has schema implications. And there's likely to be a desire to carry other stuff, so extensibility will remain important. Examples that I give are software bill of materials information, which is a very hot topic right now, both in the U.S. and Europe; and device type information is another thing that enterprise administrators really want to...
F: So this raises a couple of questions. One question is: what is the relationship between users and devices, then? Is there a notion of ownership here that has to be assigned? And owners, in the context...
F: Certainly in the enterprise, devices will have owners, or people who are responsible for these things, but it's not clear that the supplier has to actually be the one that sets that, and in fact, the supplier probably doesn't. But that doesn't mean there's no relationship.
F: The other question is: as I went through this, it became hard to understand which schema language to use to describe all this. So we actually did up our initial schemas in JSON Schema, and I like it because it's sufficiently rich to accomplish the job, and yet it's still readable, unlike certain schema languages.
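To make the JSON Schema point concrete, here is the flavor of thing being described: a hypothetical device schema, with every attribute name invented for illustration (this is not from any draft), written as a plain dict, plus a toy check of a candidate document against its `required` list. Real validation would use a JSON Schema library:

```python
# A hypothetical device schema in JSON Schema style. All names here are
# illustrative only; the point is that per-method keys (e.g. a DPP public
# key) sit alongside common device attributes in one readable document.
device_schema = {
    "$id": "https://example.com/schemas/device.json",
    "type": "object",
    "required": ["serialNumber", "onboardingMethod"],
    "properties": {
        "serialNumber": {"type": "string"},
        "onboardingMethod": {"enum": ["dpp", "brski", "fdo", "ble-oob"]},
        "dppPublicKey": {"type": "string"},  # only meaningful when method is dpp
    },
}

def missing_required(doc: dict, schema: dict) -> list:
    """Toy check: which required properties are absent from the document?"""
    return [key for key in schema["required"] if key not in doc]
```

The polymorphism mentioned next (different onboarding methods carrying different key material) is where JSON Schema's `$ref` and conditional keywords come in.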
F: So these are just some other things. We do need multiple refs, you know, real $refs, to be able to call in various polymorphic-like things. And, as I said, in my initial attempts at least, and I won't claim to be a Swagger expert, but in my initial attempts at this with Swagger, I absolutely got frustrated, and I did spend a fair amount of time trying to get through some of those frustrations in terms of having a fully normalized view.
M: Eric Norman. So, from an information-flow perspective, it's clear that the supplier is actually providing a bunch of information: credentials, etc., SBOMs, whatever, when I added that. But from the perspective of who is initiating the operation to add this thing to the system, it's not clear that's the supplier, because, yeah...
M: Yeah, you might want to wait until the light bulb showed up, the pallet of light bulbs showed up on the loading dock, until you do that, right? So it's not clear that this communication thing necessarily changes. And if you think of it that way, then you have the enterprise as the sort of accepting side of this stuff; you can now say, okay, I'm going to put these things in these groups, or whatever other structural things you have. Similar to the way you handle users and put them in groups or organizations or whatever.
L: Tim, Microsoft. I think the one thing that's different in this world is that you essentially have a bootstrap IdP and then the final IdP. So there are really two IdPs at play, which doesn't traditionally exist for the user. That's because, ultimately, like DPP for example, the supplier needs to essentially send you a list of public keys, map them to something, and then drop them in; and then at that point you can provision a new credential that's specific to your organization. So I think that's the difference in the flows as well.
F: Yeah, this doesn't talk about what happens once the enterprise receives the information. There's a mere presumption that the right thing happens, but I don't get into any details in this discussion about what that is. As you said, Tim, in DPP there'll maybe be a DPP 2.0 exchange, and then there'll be a certificate; one example would be certificate provisioning for that individual device, trust anchor exchange, yada yada yada. For RFC 8366...
F: It might just be a conversion from an IDevID to an LDevID, communicated online, however that might occur, etc. And so all that mechanism, I don't think we should have to go into in great detail in the SCIM group. All we should be able to do is support the transfer of the information so that it can get there, inter-domain.
B: I think earlier on, Leif asked a question on, you know, what's the main update, the change here, and I think the use case may clarify that. The second step, or a similar step, could be: you generate a draft that shows that way forward, and a proposal of what that might look like.
D: So I think the question is: if it is a draft, are we looking at a schema extension? Do we have to make that technical leap first, or do you just want to talk about the use cases?
B: There are two parts to it: the solution, what that may look like, but then there's the recognition that this is a valid use case. And that's where I'm suggesting, you know, we could create a separate use case draft; but we said we would be updating it. So for me it feels logical, as you're working on the adoption, that you could incorporate the content, but...
B: So I could make one poll, and if that doesn't go, then I can break it up into multiple polls. So how about if I do a poll of (I'm going to shorten it to): is the HackMD content presented by Pam, and Eliot's IoT use case, sufficient? Oh, you're getting back in the queue. Yeah, I just realized something, sorry.
D: I just thought of a reason. So, one thing is: we have to decide what the use cases and concepts document applies to. If this kind of IoT work is new work, then it might actually be applicable to a new version. Like, if we're going to write use cases and concepts for the existing RFCs, 7643 and 7644, then that might need to be a different work effort than the new work that this would constitute.
B: So Phil's comment in the chat, on the Jabber, is that Eliot's draft is just a new schema, but this is more on the revisioning of the draft. So, Phil, can you speak? Okay, while we're waiting for Phil: Collin, sure.
O: So I was just going to say, my comment's really simple, which is: it seems like these could be split apart as two different pieces of work, the devices stuff and the refreshing, the other thing. So my only advice would be: everything that can be split apart, split apart. Why not, unless they have to be deeply coupled in some way? And I'm in favor, of course, of dealing with devices.
O: I hate the word IoT, but I like the word devices. But it seems like it could be split apart. So, with that, let me jump off.
F: Yeah, I'm fine with breaking up work where it can be broken up. My only comment is simply that this is not just schema work: there's the connection model; there's data direction. I was figuring we would just cross those bridges when we came to them.
N: It was pretty much just to say: once you get into my section, this is one of the key things that I want to cover, which is essentially, what's 2.0, and what's whatever the next is, like 2.1, 3.0? Okay, and specifically, that use cases... Saying that, Pam, step back up. Can you hear me now?
K: Go ahead. I think Eliot's could be a new draft; the big part of it is new schema. And it also occurs to me that, along with devices, we need authentication (authenticator) schema as well. So that might be two drafts. And then the reverse flow might be accommodated with the Security Event Tokens, and we'll see more about that later.
B: I just typed... he hasn't typed. No, no, no private chats.
B: Okay, well, he can vote against it. Let me just run the poll, because we are well over time. Sorry, yeah, we are well over time. So let me run the poll, and then we need to move forward. So, start the session, come on.
B: Okay, we're going to give it ten more seconds, and then I'll close.
B: All right. But Pam wanted guidance, and, you know, Eliot presented the use case. So, okay: Pam, you have rough consensus. Julie, you have good consensus, 24. So what we can do is give you a public repository, that's not adopted yet in the SCIM working group, and you can put stuff there, and you and Eliot can move forward. Okay. With that, we are running well over: Danny and Janelle, protocol and schema.
B: Did you want us to share the slides, or did you want to present them yourselves?
B: Yeah, I had the same problem on Monday, and I was in the same room.
N: Okay, I'm going to give sharing a shot.
N: Okay, cool. So yeah, my name's Danny Zollner. I also have with me Janelle Allen, and we are the currently nominated editors for schemas and protocols. Next slide, please. Oh wait, I'm presenting, so I have to do the next slide; bear with me. So yeah, just a quick run through things. The current charter that we have, I provided a link, and one of the things that's been proposed and discussed, especially I believe at IETF 112 but also in the interim sessions that we've had in between, is the topic of progressing the SCIM 2.0 standard from Proposed Standard, which is where it sits today, into Internet Standard, sort of its final form. And there's sort of a floating question of:
N
Is
there
significant
value
there
like
do?
Do
we
polish
up,
scam,
2.0,
put
a
bow
on
it
finalize
it
or
do
we
just
sort
of
leave
it
as
proposed
standard
and
work
on
2.1
or
3.0
or
whatever
we
may
call
it?
N
I
think
I
refer
to
it
as
v
next
in
most
of
the
remainder
of
the
deck
and
then
there's
another
floating
question
of
what
level
of
error,
correction,
clarity,
enhancement,
etc
is
permissible
without
issuing
a
new
version,
an
rsc
series,
it's
sort
of
like
what
is
it
the
ship
maker's
problem,
where,
if
you
replace
all
the
wood
on
a
boat,
one
board
is
high
at
what
point
does
it
stop
being
the
same
boat
like
how?
How
much
can
we
change
it
with
actually
still
remaining
2.0?
N
And
to
that
I
strongly
look
towards
the
chairs
for
guidance,
but
I
want
to
just
just
you
know
put
that
out
there.
It
is
one
of
the
big
problems
that
we're
trying
to
wrap
our
heads
around
today
of
how
much
of
the
charter
that
we
have
today
yeah
how
much
how
much
of
it
is
a
charter.
Today,
sorry
thoughts
are
getting
doubled.
How
much
of
it
is
doable
today
versus
in
in
a
future
version,
and
then
so
the
try
to
propose
a
lot
of
schema
and
protocol
enhancements.
N
Virtually
all
of
those
are
too
big
to
be
part
of
2.0.
If
you
know
we're
looking
at
that
as
error,
correction
and
whatnot,
so
there's
that
a
question
that
I
would
like
to
be
discussed,
which
is
do
do
we
go
towards
2.1
3.0,
or
do
we
finalize
2.0
and
just
throw
out
a
litany
of
extensions
to
cover
the
you
know,
20
plus
topics
that
the
charter
would
like
us
to
cover.
N
So
I'm
gonna
move
to
the
next
slide
and
just
quickly
here's
all
of
the
work
related
to
schemas
and
protocols.
That's
listed
in
the
charter,
it's
available
at
that
link
that
I've
also
previously
shared
I'm
not
going
to
spend
too
much
time
on
this.
But
it's
everything
from
you
know:
sort
of
clarifying
and
expanding
on
usage
of
things
like
external
id
changes
and
expansion
to
account
state.
N
You
know
completely
new
concepts
like
multi-value,
query,
filtering
and
paging,
and
you
know
a
couple
of
other
things
as
well,
and
it's
just.
There
is
no
conceivable
way
that
we
do
this
inside
of
2.0.
So
I
really
want
to
get
like
a
a
group
consensus
and
figure
out
how
we
proceed
forward
and
one
quick
thing
to
share.
N
Also,
we
have
set
up
with
the
chairs,
help
a
a
github
organization
and
we
have
a
couple
of
repos
containing
the
current
xml
versions
of
the
the
schema
and
the
protocol
apis
and
we're
going
to
be
working
on
sort
of
tracking
the
various
issues
as
github
issues
to
easily
facilitate
certain
discussion
and
propose
solutions
throughout
that
which
we,
we
will
still
also
then
recap
and
bring
things
for
discussion
over
the
skim
mailing
list
as
well
and
there's
just
a
couple
notes
on
the
plan
of
how
we
proceed
here.
N
I'm
not
gonna
linger
on
this.
Unless
anybody
has
questions
I'll,
send
this
out
via
email
as
well.
N
So
I
guess
one
of
the
the
big
topics
and
I'm
trying
to
get
through
this
quickly,
because
this
is
a
discussion
really.
I
like
the
goal
here
is
feedback,
one
of
the
things
that
if
we
were
to
push
2.0
to
be
internet
standard,
we
would
need
to
do
a
survey
on
the
implementation
of
concepts
and,
honestly,
even
if
we're
not
pushing
it
for
2.0,
there's,
probably
still
value
in
figuring
out.
Are
there
parts
in
the
survey
are
scott?
Sorry?
N
Are
there
parts
in
the
in
the
2.0
standard
in
the
schema
or
protocol
that
just
aren't
adopted?
You
know
either
at
all
or
widely
or
in
an
interoperable
manner,
and
so
we'd
like
to
get
feedback
on
that
and
sort
of
start
the
ball
rolling
on
either
cutting
or
resolving
issues
with
certain
parts
of
the
of
the
existing
standards.
N
I'm
sorry,
let
me
go
back
a
second
and
so
yeah
just
a
couple
of
examples.
There
you
know
basic
auth
in
the
in
2.0.
You
know
skim
was
you
know.
Skin
people
was
published
in
2015
at
this
point
in
time.
Just
you
know,
I,
I
think
it's
sort
of
inexcusable
to
use
basic
auth.
Unlike
the
you
know,
an
internet
facing
implementation
of
scam
versus
something
like
a
bearer
token
or
some
form
of
oauth.
N
There's
also,
you
know
passwords
if
the
the
common
law
of
the
land
nowadays-
and
you
know,
industry
standard
is
federation
between
you
know
in
idp
and
something
else
is
their
value
in
the
core
schema
at
least
containing
passwords.
I
I'm
aware
that
there
are
certain,
mostly
we'll
call
them
like
legacy
concepts
where
you
know
you
might
be
provisioning
into
some.
N
You
know
like
a
mainframe
or
something
where
you
must
provide
a
password,
and
that
makes
sense,
but
whether
or
not
those
are
sort
of
internet
facing
and
whether
they
line
up
with
the
sort
of
the
the
goal
of
this
game
standard
is
up
in
the
air.
If
we
look
at
the
name,
it's
the
system
for
cross
domain
identity
management-
god,
I'm
yeah,
I'm
not
my
game
today.
N
Sorry
and
there's
other
things
like
photos
as
well,
which
we've
discussed
in
the
interim,
which,
from
a
sort
of
cross
cloud
standpoint,
are
hard,
if
not
impossible,
to
implement
due
to
security
for
concerns.
N
So
just
you
know
very
it's
a
non.
It's
not
an
entire
list,
the
goal.
I
again
you
know
we
want
other
people
to
get
involved
rather
than
just
being
janelle,
and
I
trying
to
figure
things
out
on
on
the
feminism
protocols.
N: So the title of it is "XY problem", and I have a bullet that says: are some concepts useful but potentially outside the scope of what the SCIM standard aims to address? With a sub-bullet that says some decisions on cuts and edits to 2.0 should wait until after use cases are revised. And then there's a question that sort of came up when Pam and Elliot were speaking as well, which is: is the work to revise use cases targeted at 2.0, or at the v...
B
Now
I
was
going
to
say:
phil
has
been
on
the
queue
he
just
dropped
from
the
queue
I
didn't
know.
If
he
had
a
comment
on
your
slide.
N: Yeah. And so finally, just bringing it to a recap, I really just landed it on the question, or the questions, that are floating. And at this point, anybody who's here: I want opinions, because I've got my own, but the SCIM standard is not mine, and I want feedback, please, on what people's thoughts are on really each of these big points.
P: I am in the queue. Good morning. So my perspective on this is that we should do the right thing. As implementers, I think looking at a long series of extensions can get really complicated, and so the more we incorporate into the core spec, I think the better the likelihood is that we'll have full compliance with the intended outcome of this working group in future implementations.
H
I
know
I
just
didn't
want
to
interrupt
anybody.
One
of
the
things
when
we've
been
exploring
things
is
we
look
at
the
core
schema
and
there
is
a
lot
in
the
core
schema
comprising
an
identity
and
there's
a
lot
of
questions
that
come
into
play
regarding
that
like,
for
instance,
it
may
be
valid
to
keep
passwords
around
for
some
use
cases,
but
perhaps
driving
that
into
its
own
extension
or
sub-schema
of
the
core
and
and
same
goes
for
some
of
the
other
schema
elements.
H
But
if
we
start
breaking
apart
like
that
and
take
them
out
of
the
core,
then
you
know
what
does
that
mean
for
those
who've
implemented
2.0?
That
sounds
like
more
like
a
3.0
thing,
which
is
kind
of
where
some
of
the
thoughts
that
we've
had
have
gone
when
we've
been
discussing
like
how
do
we
just
put
a
bow
on
2.0
like?
Should
we
just
do
light
edits,
say
this?
Is
it
and
and
give
2.0
the
credibility?
H
Has
lots
of
people
have
implemented
it
as
it
is
with
some
clarifications
fixing
the
errata,
some
really
clear,
use
cases
of
of
how
to
work
with
the
spec
as
it
is
or
just
say,
that's
it.
This
was
a
great
standard.
It
served
its
purpose
for
its
time,
like
let's
drive
forward
on
3.0.
N: You know, the client communicates the URL, and the server would then have the URL on hand to go pull the picture from. There are problems there, though, of how does the client do that, especially when it's over the internet rather than on an intranet, where there are firewall rules and security is a little more easily managed.
N
How
does
the
the
server
come
back
and
retrieve
that
image
to
use,
and
it's
it's
in
its
own
service
sort
of
like
hot,
linking
to
it
and
just
pulling
the
image
every
time
it
needs
to
be
loaded,
isn't
necessarily
feasible
and
just
from
being
my
own
experiences,
which
color
my
you
know.
My
thoughts
here
any
like
fast
implementers
that
I've
talked
to
who
have
service
providers
who
want
to
consume
pictures.
N
They
want
to
be
able
to
receive
a
picture
and
store
it
in
their
service
so
that
there's
no
meaningful
latency
when
they
go
to
show
it
in
somebody's
profile
and
so
that
whole,
like
approach
to
pictures,
isn't
something
that's
really
doable
today
with
what's
described
in
the
skim
spec.
I
Sort
of
a
question
sort
of
a
comment
so,
with
respect
to
that
question,
now
how
to
deal
with
making
this
skim
2.1,
making
it
skim
3.0.
Something
like
that.
My
observation
is
somebody
digging
into
skim
very
recently
and
discovering
skim
very
recently
is
that
there
is
quite
some
wide
adoption
actually
in
a
quite
specific
use.
Cases
like
identity
providers,
provisioning
star
services,
and
this
is
actually
very
widespread.
So
a
lot
of
services
are
supporting
that.
I
At
least
I
didn't
easily
stumble
upon
that,
while
introducing
my
or
myself
to
to
skim
before
so
my
question
a
little
bit
is
also
is
there
some,
I
mean,
which
I
think
would
be
helpful
for
casting
that
decision
about
how
to
go
forward
with
a
with
the
update
of
the
standards
is
basically
does
anybody
actually
have
an
overview
of
which
different
domains
exist
and
use
cases
are
already
implemented,
and,
if
not,
that
might
be,
I
think
very
helpful
in
order
to
cast
that
decision.
B
Yeah
pam,
I
want
to
say
in
the
buff
we
had,
we
had
a
cursory,
it
wasn't
complete,
so
we
could
try
and
bring
that
up
in
an
introduction,
perhaps
in
a
virtual
interim
yeah,
to
help
with
that
go
ahead.
Pam.
D
Yeah,
I
think,
there's
some
interesting
question
about
so
the
use
cases
that
we
did
see
that
were
new.
Often
they
were
characterized
as
schema
extensions
and
they
included
hr
hr
use
cases.
We
can
definitely
go
back
and
look
at
what
those
are,
but
I
think
we
have
a
starting
point
for
that
kind
of
overview.
If
you
will
that.
B
There
were
two
main
categories:
there
was
the
updating
of
the
protocols
for
scaling,
scalability
and
then
from
a
schema.
It
was,
I
think
we
used
the
word
modernizing
relating
to
the
the
fields
that
an
hr
database
might
have.
B
But
as
pam
said
we
can,
you
know
we
can
bring
those
back.
They
may
actually
be
noted.
Also,
oh,
no.
It
wasn't
about
never
mind,
but
we
can
bring
those
back
up.
Your
point
is
well
taken
thanks,
so
danny
I
I
should
channel
philip's
response
to
your
response
to
the
uri
and
photos
is
there
was
discussion
before,
but
maybe
it
was
on
a
virtual,
but
anyway
there
was
consensus
from
before
that
implementers
wanted
to
choose
how
to
solve
the
problem
as
opposed
to
addressing
so
that
was
phil's
feedback.
N
From
my
own
experience,
there's
a
lack
of
adoption
for
specifically
photos
because
there
is
no
standardized
way
to
approach
it.
So
it's
really,
I
think,
a
question
of
is
the
current
photos.
Just
you
know,
sticking
with
this
fleshed
out
enough
to
actually
be
adopted
at
all,
and
I
suspect
that
the
answer
is
yes
based
off
of
what
phil
said
in
the
previous
interim.
N
However,
this
in
turn,
I
think,
brings
us
to
the
third
bullet
point
on
the
slide
that
I'm
showing
right
now,
which
is
even
though
it's
not
really
milestone
related,
but
for
things
related
to
the
scheme
and
protocol,
should
we
like
try
and
get
the
use
cases
to
cover
things
like
to
to
cover
essentially
to
describe
the
problem
that
we're
going
to
solve
because
well
the
the
description
of
how
to
do.
N
Photos
in
skim
today
works
in
certain
scenarios,
but
it
doesn't
really
work
in
like
the
the
internet,
the
sassy
scenario,
which
is
yeah.
I
don't
know
why
there's
an
echo
but
so
yeah
the
photos,
work
somewhere,
but
does
it
work
for
what
skim
is
actually
trying
to
solve?
I
guess
that's
the
question
and
this
then
expands
out
to
a
whole
bunch
of
other
things
like
passwords.
N: I think some of it can be done in parallel. I still need to figure out, and do a vote or whichever, on how we proceed, whether it's the 2.0 or a vNext, just so that we can have consensus. So I'd like to just figure out how we word that and put it up to a vote here, or a show of hands.
E: So I think the current version, the current published specs, are 2.0; that's how they've been described for a long time, right? So I think whatever this is, it's 2.1 or 3.0. And I would try to split it up into individual GitHub issues and have a discussion, maybe starting with any identified fundamental stuff that actually changes the protocol.
E
You
know
in
a
way
that
breaks,
maybe
breaks
backwards,
compatibility
compatibility,
but
it's
certainly
sort
of
there
are
stuff
here
that
maybe
is
challenges
underlying
assumptions
from
from
the
point
of
view,
client
implementers
and
that
should
probably
go
first
in
the
discussion
queue
to
figure
out
sort
of
you
know
what
their
what
the
implementation
community
is
thinking
about
it.
That's
I
guess
what
I
would
do
and
and
then
you
know
yeah.
There
are
plenty
of
extensibility
hooks
in
scheme
there.
There's
no
need
to
push
everything
into
the
core
brother
goal.
F
Thank
you.
I
largely
agree
with
lei
over
what
he
just
said
that
that,
having
like
an
issue
tracker
will
will
help.
You
know
the.
F: The question is: where do we need major change, and what is incremental? And those can even proceed in parallel in some sense, if that's what people want. Where I slightly disagree, and only slightly disagree, with Leif, and it's such a small matter that I probably shouldn't even mention it: I think it's important to understand core protocol implications, what needs to be in the core and what needs to be in extensions.
F
I'm
not
sure
that
that
that
skin
is
perfectly
extensible.
I
think
a
little
bit
of
elaboration,
in
whatever
follow-on
specifications
happen,
should
be
a
lot
clearer
about,
for
instance,
how
do
you
fully
normalize
right,
because
there
there
isn't
a
lot
of
example
in
that
in
skin,
and
it's
something
that
wasn't
really
con.
I
mean
I
think
it
was
contemplated,
but
at
the
time
that
skim
first
came
out
right,
it
needed
to
get
done
because
people
were
waiting
for
it.
H: Yeah, well, thank you for your feedback, everybody who's provided it. I think we're just trying to do what's right for the standard, the standard as it exists and as it moves forward. There are certainly areas that are interesting too, and it comes up with regards to the identity providers. Prior, when SCIM 2.0 was coming about as a standard, there was this notion of the single authoritative source of identity data pushing that data.
H
The
service
providers
can
actually
choose
experiments
that,
of
course,
you
must
provide
to
them
and
in
part
of
our
discussions
was
well.
Should
we
have
the
service
providers
report
back,
which
attributes
they're
willing
to
accept
from
a
from
a
client
so
that,
because
they
may
not
take
them
all,
they
might
not
take
the
password,
for
example,
or
they
also
may
not
take
any
of
the
other
attributes.
H
If
they're,
you
know,
if
they
feel
that
they're
the
authority
of
the
phone
number
and
not
the
client,
they
might
say
well
we're
going
to
ignore
that
phone
number,
and
so
then
this
provides
a
notion
of
well.
Then,
if
you
send
that
back
and
the
client
is
expecting
to
then
read
back
that
data
and
match
it
with
the
data,
they
have
and
there's
a
mismatch,
and
how
would
you
reconcile
that
or
how
would
the
client
reconcile
that?
H
Or
do
you
move
to
more
of
a
notion
of
a
composed
identity
profile
which
might
have
data
from
multiple
sources
comprising
the
true
identity
of
that
individual
was
some
of
the
other
things
that
have
come
up
and
it
comes
up
to
when
we
talk
about.
You
know
who's
the
authority
on
the
device
side
for
that
device,
data
and
things
like
that
as
well.
So
there's
some.
There
is
definitely
some
overlap
in
that
area
and
I
think
it'd
be
great
to
get
feedback
on
those
thoughts.
N
Okay,
well
yeah
we're
at
the
end
of
the
slides.
So
I
think
at
this
point
we'll
sort
of
try
and
cut
apart
what
can
be
accomplished
without
waiting
for
use
cases.
What
we
think
should
reasonably
we
waited
for
to
make
sure
that
it
aligns
with
these
cases
like.
Is
it
a
solution
to
a
problem
that
you
actually
want
to
solve
and
then
we'll
we'll
work
forward
from
there.
B: So next up is Phil, and he was going to present SCIM events; there is a draft up, but he's having audio issues. He did send me a transcript. But my last question is, Phil: do you want to give it one last try, or do you want me to just read the transcript?
B
Okay,
if
you
could
share
the
slides,
great
okay,
so
thanks
everyone
for
coming
out
today.
This
presentation
is
about
security
events
profile
that
I
recently
published
as
an
individual
draft.
What
are
security
events
and
what
are
the
propos?
What
are
we
proposing
for
skim?
You
may
be
wondering
what
a
profile
is.
I
use
that
word
to
indicate.
We
are
not
space
specifying
something
new,
we're
simply
taking
an
existing
set
of
specs
and
profiling
them
for
use
within
skin.
B
Oh,
that
was
the
agenda
slide.
Next
slide.
Sorry,
okay,
what
is
a
security
event
token?
It
is
a
specialized
use
of
jots,
profiled
or
exchanging
secure
event.
Messages
between
systems
in
2015,
several
groups
inside
and
outside
of
the
itf,
were
all
planning
to
do
the
same
thing,
morteza
and
serie
of
cisco
and
then
co-chair
of
skim,
william
dennis
of
google
and
myself
not
me.
Phil
hunt,
then
of
oracle,
put
together
a
draft
specification
in
the
skim
working
group
that
extended
jots
for
skim
events.
B
This
draft
was
submitted
to
address
skim's
charter
item
of
being
able
to
send
trigger
events
between
systems,
because
so
many
groups
were
thinking
about
the
same
thing.
There
was
quick
agreement
for
oauth
2,
open
id
connect
and
risk
incident
sharing
groups.
I
forgot
to
reset
this
risk
incident
sharing
groups
to
work
together
on
a
common
spec
which
later
became
set
or
in
the
itf
rc
8417
under
the
working
group,
sec
events.
B: What often does require coordination is a need to link lifecycles of resources across domains. For example, a user disabled in domain A needs to ultimately be disabled in domain B. In an event system, the event receiver, which has full knowledge of its local domain, is able to take an external event, request more information if needed, and then reconcile that event to determine what local action should be taken, if any. Next slide.
B
I
hope
to
have
a
new
library
shortly
with
some
convenient
builders
for
java.
Shortly
under
the
sec
events
working
group,
two
basic
delivery
specs
were
defined.
The
set
push,
delivery,
rc
8935
specification
allows
a
publisher
to
post
an
event
to
a
registered
web
callback
endpoint
and
the
set
pulling
delivery.
Spec
rfc
8936
allows
a
receiver
to
initiate
http
requests
to
retrieve
new
events
to
enable
real-time
delivery
set
polling
also
defines
use
of
http
long
polling.
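The poll-based exchange described here can be sketched as the request body a receiver might send. The `maxEvents`, `returnImmediately`, and `ack` member names follow RFC 8936; the function name and sample values are illustrative, and the HTTP transport is omitted.

```python
import json

def build_poll_request(max_events=10, return_immediately=True, ack_jtis=()):
    """Body for a SET poll request (RFC 8936): ask for up to max_events
    new SETs while acknowledging already-processed ones by their jti."""
    return {
        "maxEvents": max_events,
        "returnImmediately": return_immediately,
        "ack": list(ack_jtis),
    }

# Acknowledge one previously received SET and ask for up to five more.
body = build_poll_request(max_events=5,
                          ack_jtis=["4d3559ec67504aaba65d40b0363faad8"])
print(json.dumps(body, indent=2))
```

The response would carry a `sets` object mapping `jti` values to SETs, which the receiver validates before acknowledging on its next poll.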
B
One
aspect
of
these
drafts
is:
they
do
not
require
the
set
event
publishers
to
guarantee
long-term
ability
to
recover
events.
Instead,
the
specifications
require
the
receiver
acknowledge
each
event
received
after
the
receiver
has
securely
validated
and
secured
a
received
event.
When
an
event
is
acknowledged,
the
publisher
is
free
to
forget.
B: Fortunately, the SET delivery specs do allow for longer event recovery; they just don't mandate it at all. This brings me to the issue of how we can implement this: is there something that manages event feeds and implements the SET transfer methods? For that, I want to turn to the presentation... oh crap, he's turning the presentation over to Nancy. That's me, so I can share some of the work that we're doing with the Cisco Duo team, and it is open source. So next slide, please. Okay, so basically, so far we've talked about the SETs.
B
There
is
a
new
working
group
in
the
open
id
foundation
called
the
shared
signals
and
events.
It
is
a
framework
that
defines
an
api
or
interface
if
you
will
and
I've
put
the
link
in
there.
That
does
leverage
the
use
of
the
delivery
streams
using
sets
using
the
set
push
and
the
set
pull
to
manage
effectively
these
streams
that
phil
has
been
mentioning.
B
There
are
currently
two
schemas
that
are
defined
in
their
risk,
isn't
quite
going
through
last
time,
if
I
recall,
but
the
one
that
has
been
approved
is
the
continuous,
continuous
authentication
and
evaluation
protocol
fondly
known
as
as
cape,
and
so
what
I
listed
in
the
table
is
the
different
event.
Types
that
are
defined
in
each
of
the
schemas
and
cape
is
the
one
that
the
working
group
is
mainly
latching
on
next
slide.
Please-
and
so
I
I
think
I
lost
the
link
there,
we
do
have
cisco,
did
implement
a
reference
implementation.
B
So,
if
you're
interested
to
see
how
it
gets
used,
the
link
is
shared
signals,
guide,
dot,
io
and,
if
you
guys
are
interested,
did
I
put
it
in
there?
Oh
thank
you.
I
thought
I'd
put
it
in
there.
It's
there.
Basically,
the
flow
of
the
flow
that's
defined
and
the
shared
signals
and
events
are
in
steps.
Two
and
three
that
are
being
described
there.
B
The
one
and
two
is
basically
an
endpoint
doing
the
service
request
to
a
relying
party
and
then
the
actual
meat
of
getting
the
the
event
of
I'm
a
security.
So
more
like
a
security
event
like
in
cape,
your
authentication,
token
has
been
revoked
is
where
that
registration
and
pull
happens
in
step
two
and
three:
that's
the
shared
signals
and
event
work,
and
then
the
actual
enforcement
or
remediation
action
is
in
step
four.
B
So
this
is
just
like
the
stratospheric
view
of
what
sse
does
we
wanted
to
give
you
that
introduction,
so
that
you
would
know
about
it
as
we
now
bring
it
back
to
skim
next
slide?
Please,
and
so
the
major
use
case
here
that
has
been
talked
about
since
the
beginning
of
skim,
keeping
resources
between
skim
servers,
coordinated
and
or
in
sync
the
specca
that
identifies
two
variations.
B
The
idea
is
that
coordination
messages
be
sent
to
enable
each
skim
server
to
stay
in
sync
with
the
rest
of
the
domain,
with
some
minor
exceptions,
an
event
message
simply
repeats
the
skim
protocol
request
that
is
received.
It
contains
all
the
data
necessary
to
process
the
request
as
if
it
were
received
via
rc
7644.
B
The
base
scheme-
okay,
coordinated
provisioning,
allows
the
receiver
to
know
that
a
publisher's
resource
has
changed
what
the
type
of
change
was
and
the
attributes
changed.
In
this
case,
a
set
does
not
contain
any
raw
data,
except
for
the
id
and
or
the
external
id
which
already,
which
are
already
shared
between
domains.
B
When
a
receiver
gets
an
event,
it
can
mark
its
own
resource
as
stale
and
take
action.
For
example,
after
receiving
an
event,
the
receiver
can
perform
a
skim,
get
request
from
the
publishing
service
provider
to
see
the
current
resource
representation
and
the
receiver
can
then
do
a
reconciliation
of
the
change
resource.
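The two-step pattern just described (notice the change, fetch, reconcile) might look roughly like the sketch below. The function name and the merge policy are illustrative, not taken from the draft; `fetch_current` stands in for the SCIM GET round trip.

```python
def reconcile(local, changed_attrs, fetch_current):
    """Act on a coordinated-provisioning event: if any attribute we track
    was reported changed, fetch the publisher's current representation
    (e.g. via a SCIM GET) and fold those attributes into our copy."""
    tracked = set(local) & set(changed_attrs)
    if not tracked:
        return local  # nothing we care about changed; ignore the event
    current = fetch_current()  # stand-in for the SCIM GET round trip
    merged = dict(local)
    for attr in tracked:
        merged[attr] = current.get(attr, merged[attr])
    return merged
```

The point of the sketch is that the event itself carries no values: the receiver decides locally whether the change matters before spending a round trip on it.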
B
The
reason
for
this
two-step
approach
is
that
it
avoids
sharing
of
confidential
information
in
scenarios
that
may
involve
out-of-band
relays
or
otherwise,
more
importantly,
if
the
domains
care
about
different
attributes,
a
lot
of
data
may
simply
not
be
necessary,
because
replication
is
not
the
primary
goal
in
coordinated
provisioning
mode.
The
receiver
only
needs
to
act
when
an
attribute
it
cares
about
has
changed.
B: We could have sent filtered data in the event for each feed subscriber, kind of like a limited DVR, but I believe this creates a number of problems. One is more event processing: events have to be specialized for every receiver, and also the publisher has to know what data the receiver is actually interested in; it has to have more awareness of the receiving domain.
B: This two-step approach treats events as triggers to take a future action or to reconcile the differences between the providers. Next slide. So, security signal events have been talked about by the SCIM working group prior to the formation of the RISC or Shared Signals groups. In essence, these are higher-level events that draw conclusions based on SCIM schema.
B: The ones used in Shared Signals are somewhat more complex, allowing different subject identifiers to be used. I think we need to explore these events more thoroughly to decide whether SCIM should define its own signals or reference the external specifications like SSE. Next slide. Because SCIM is actually a profile of HTTP...
B: An event, of course, is a claim. Kind of like SCIM URIs, the events claim is a JSON structure carrying event URI attributes, against which a JSON object can be attached. In the SCIM event profile, each event URI contains the transaction details of the event inside an event schema object. We define four attributes for SCIM: an id, externalId, attributes, and data. id and externalId are obvious; attributes lists the modified attributes when raw information is not to be shared, whereas data is used to pass the raw SCIM request, e.g. for replication.
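A SET carrying those four members might be assembled like this. The claim layout follows the standard SET envelope (RFC 8417: `jti`, `iat`, `iss`, `aud`, `events`), but the event URI and the exact payload member names are a sketch based on the description above, not the exact strings from the draft.

```python
import json
import time
import uuid

def scim_event_claims(issuer, audience, event_uri, id=None,
                      external_id=None, attributes=None, data=None):
    """Claims of a SET wrapping one SCIM event. The object keyed by the
    event URI carries id / externalId / attributes / data as described."""
    payload = {}
    if id is not None:
        payload["id"] = id
    if external_id is not None:
        payload["externalId"] = external_id
    if attributes is not None:
        payload["attributes"] = attributes  # names only, no raw values
    if data is not None:
        payload["data"] = data  # the raw SCIM request, for replication
    return {
        "jti": uuid.uuid4().hex,  # unique SET identifier, used for acks
        "iat": int(time.time()),
        "iss": issuer,
        "aud": audience,
        "events": {event_uri: payload},
    }

# Coordinated-provisioning style: modified attribute names, no raw data.
claims = scim_event_claims(
    "https://scim.example.com", "https://receiver.example.net",
    "urn:example:scim:event:modify",  # illustrative event URI
    id="2819c223", attributes=["userName", "emails"])
print(json.dumps(claims, indent=2))
```

In practice these claims would then be signed (or at least MAC'd) as a JWT before delivery over SET Push or Poll.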
B: In many cases, the receiver need not parse any information other than the event type URI. Next slide, please. Okay, subject identifiers. Before I show you what an event looks like, I'd like to talk about the SecEvent Subject Identifiers draft; that's the draft that the SSE work uses, by the way. This draft defines many different types of identifiers that can be specified, like email, IP address, telephone, etc.
B: First, when resources are created, there is already agreement on common identifiers. Two, in a case where a SCIM client has forgotten an id, the event provides a SCIM URI for the resource in the event. And third, the event receiver can finally perform a SCIM GET to obtain any additional claims it needs to locate the corresponding local entity.
B
I
recommend
against
using
the
skim
subject
identifier
draft,
as
this
would
open
up
skim
receivers
to
having
to
process
identifiers,
not
typical
of
skin.
This
optionality
would
decrease
interoperability
in
order
to
align
with
a
problem
domain
with
unresolved
standard
agreements
on
identifiers
next
slide.
B: This attribute corresponds to the original SCIM request. When this event is done for coordinated provisioning, the data is not provided, and instead "attributes" is provided with the set of attributes issued. Note that the create event actually includes an id, whereas the SCIM create request does not. Why? Well, remember, a SET is an event that has already happened, so the SCIM create event shows the id that was actually assigned.
B
Okay
next
slide,
in
addition
to
repeating
the
skim,
crud
events,
there
are
a
few
other
events
that
are
important.
The
feed
events
are
about
a
learning,
a
receiver
about
a
new
entity
that
they
may
not
be
aware
of.
This
happens
when
the
user
population
in
one
domain
is
a
subset
of
another.
For
example,
users,
provision
to
sfdc
might
be
users
in
azure
that
may
have
the
role
crm
when
sally
has
the
entitlement
of
crm
added
sfdc
is
alerted
of
that
the
sally
resource
is
now
part
of
the
fiend.
B: These are different in that the audience of the event may not be a SCIM system, client, or provider. Finally, as previously discussed, the async response event is also included. Next slide. In the event profile, I've described a couple of different delivery mechanisms, in part because I felt it important to describe how events might be exchanged and how they might be managed. Looking at this, I saw two broad categories for delivery: bus-based systems and point-to-point systems.
B: Many companies may have large investments in message buses. In the SCIM case, they're useful because you don't have to describe complex hierarchies of server interconnects that mandate master servers, all barriers to global infrastructures and scale. Instead, message buses become convenient for a number of reasons: a bus can implement infinite event recovery, if desired.
B
Each
skim
server
only
connects
to
one
bus
rather
than
each
other.
The
bus
takes
care
of
delivery,
recovery
and
fault
tolerance
having
each
server
connect
to
one
bus
rather
than
each
other
dramatically,
simplifies
configuration
credential
management
and
the
number
of
event
transfers.
Finally,
a
bus
can
also
act
as
an
auditable
record
for
changes
in
systems
over
time.
B
Still
with
all
those
benefits.
When
we
go
cross-domain,
we
want
to
control
the
flow
through
a
limited
set
of
connections
or
gateway.
In
this
case,
ssc
becomes
advantageous
next
slide.
So,
what's
out
of
scope,
because
events
are
statements
of
fact
of
what
has
occurred
rather
than
commands,
the
spec
doesn't
prescribe
what
a
receiver
must
or
should
do
other
than
the
message
itself.
B
The
breakthrough
of
set
is
to
avoid
prescriptions
and
to
focus
on
triggers
or
signals
that
enable
independent
action,
because
there
are
many
delivery
systems.
The
spec
will
only
talk
about
the
basic
mechanisms
and
any
privacy
and
security
considerations
that
flow
from
those
systems.
Thanks,
okay,
I
was
slower
than
phil.
Sorry,
we
have
five
minutes
before
we're
done.
We
can
take
comments,
questions
so
tim.
L: Tim Cappalli, Microsoft, and just for disclosure, I'm one of the chairs of the SSE working group. Can you go back one slide?
L
I
think
the
the
statement
around
commands,
I
think,
is
super
important
here
right.
So
when
we
look
at
sse
and
what
it's
doing,
it's
it's
a
statement
of
fact
that,
in
the
view
of
the
transmitter
right,
so
my
only
concern
here
is
to
take
a
dependency
on
ssc
in
a
protocol.
Those
are
more
commands
than
signals
right.
Your
in
my
opinion,
at
least
the
way
I
understand
it
right
you
are.
You
are
using
this
method
to
convey
protocol
operations
versus
observations
by
a
single
party.
If
that
makes
sense.
So
that's
that's
my
only
concern.
L
I
I'm
super
happy
to
see
sse
getting
visibility
here.
That's
just
my
only
concern
right
because
that's
been
a
super
important
distinction
for
sse,
at
least
as
its
kind
of
position
say,
doesn't
mean
it
can't
change,
but.
E
All
right
leslie
was
on
so
I
guess
I'll
echo
that
and
also
say
that
that,
like
the
implementation
complexity
of
this
worries
me
a
little
bit
in
like
the
way.
If
I
were
starting
from
a
white
sheet
of
paper,
I
would
probably
limit
myself
to
just
signaling
that
something
has
happened
to
a
schema
resource
and
let
the
receiver
figure
out
what
to
do
with
that
information,
whether
it
means
re-synchronizing,
the
schema
resource
or
not.
B: Okay, so I'm just going to echo Phil here from the chat, and I'm not sure, Tim, it might have been to your comment: that's why he anticipates this happening over buses versus cooperative REST. Yeah.
L: I guess I don't think you want two different... SCIM is supposed to be very authoritative, correct? So I don't think you want two different pieces of SCIM for the operation of the protocol, where one is "do what you want" and one is authoritative. That's concerning to me, right? If you're... ultimately, yeah, I don't know. Again, I can be easily convinced; that was just my concern.
B
Okay,
did
he
have
questions
that
I
didn't
okay,
so
phil
actually
had
questions
that
I
I
failed
to
go
through
so
in
the
interest
of
time.
B
We
actually
stayed
on
time
all
right.
Thank
you,
oh
one,
last
logistics
does
the
group
want
to
continue
with
the
every
four-week
virtual
for
progress
every
six
weeks.
B
I'm
going
to
say
every
six
weeks:
let
me
put
a
poll:
do
you
want
to
continue
with
an
every
six-week
virtual
ver
if
I
could
only
type
that
would
be
dangerous?
Okay,
can
you
raise
your
hand
if,
if
you
want
to
continue
with
every
six
weeks,
we'll
have
a
virtual.