From YouTube: Microsoft Graph developer community call - February 2020
Description
Jeremy Thake and Darrel Miller discussed Azure Auth SDK alignment and Fluent SDKs in this month's call.
Next call: March 3rd at 08:00am PST - https://aka.ms/MicrosoftGraphCall
More on Microsoft Graph SDKs - https://docs.microsoft.com/en-us/graph/sdks/sdks-overview
A: Thank you for joining another Microsoft Graph developer community call. I'm actually here in the room with Darrel. Darrel's in town for the week from Montreal; he's brought the ice with him, thank you for that. So today is going to be a bit different from some of our community calls that we've done in the past. We really are encouraging feedback with these sessions, and we've got two topics that are strongly aligned around SDKs, which is Darrel's world within Microsoft Graph, so you'll be hearing mostly from Darrel in this session. Firstly, we're making some decisions around what we're going to be doing with authentication around the SDKs, and then we really wanted to dive deep on the fluent JS SDKs, based on some feedback and discussions we've been having with lots of people in the community over the last few months. And so, I'm Jeremy Thake.
A: And if you're not following us on Twitter and you're on Twitter, I highly recommend following us. We share a lot of different things around the Graph over the days in between all the different other channels that we engage with you on. So without further ado, Darrel, I'll hand it straight over to you to talk a bit more about the Graph SDKs.

B: Yes.
B: As Jeremy said, there's a bit of a theme in the sessions today, and it's a lot about trying to find ways that we can reuse existing efforts so that we can deliver more value without Microsoft and the community duplicating efforts, but also without moving your cheese, because we all know that folks have investments in stuff, and when we take stuff away it doesn't really help anybody. So without further ado: why are we doing this?
B: There are a few motivations with regard specifically to us aligning with the Azure SDKs. One of them is that we've had a request from the developer division, who are the team that actually own the creation of the Azure SDKs. They want to align with Graph, specifically because they're seeing more and more scenarios where there's crossover between people who are developing on the Graph and people who are working with Azure resources. And when it comes to using SDKs, under the covers you're really just making an HTTP call, so why should it really be any different?
B: Also, if you follow the work that we're doing, we have a growing backlog of stuff, and people keep suggesting awesome ideas, and we don't quite have enough people to be able to do all of the things that need to get done. So having some additional help and being able to reuse existing assets from the Azure team is just a bonus for everyone.
B: The thing is, we have to learn what is the best way of setting up an environment for people to be able to make calls reliably and efficiently, and to get good tracing, observability, and reliability, and we've got two different teams sort of doing the same thing. It's good for us to be able to learn from each other, and hopefully the end result is what's best for you folks. So what are we trying to do?
B: First, reduce cost and friction for customers who are using both the Graph and Azure, and leverage the work that is already being done by the Azure SDK team. You'll see as we go through this that some of it is just simply aligning on guidelines. I like to use the phrase "choosing the color of the bike shed": in some cases we've chosen one color and they've chosen another color, and that's just unnecessary friction.
B: The end result is that we have to build a better experience for Graph customers. We do recognize that in the Azure world sometimes things are done a little bit differently, and sometimes there are reasons that Graph does things differently than Azure, and therefore, where it does make sense to take a different approach, we will continue to do what is best for Graph customers. We're not just going to go, "Oh yeah, let's just take the Azure stuff and make everybody on Graph use that," even though it doesn't quite fit, just to save ourselves the trouble.
B: The end result is to produce a better experience. So before I continue, I just want to give a kind of overview of how the SDKs are currently built, how all the pieces come together, and sort of compare and contrast how we do it over in the Graph world versus how it happens in the Azure world. That will set the scene for talking about the alignment work and the areas where we're not going to align. So here we go.
B: The way it works in Graph is that all the teams who build services on the Graph describe their API using CSDL, so each service team has CSDL. CSDL is the Conceptual Schema Definition Language, basically a big chunk of XML that describes what an OData API looks like. If you've ever worked with Entity Framework and done EDM models, it's basically the same thing as EDM. The metadata for each of those services is aggregated together and then exposed at a public endpoint on the Graph.
B: We call it the $metadata endpoint, and you can query it for both the v1.0 API and the beta API to see the full description of the Graph API. We use that metadata for all kinds of purposes, and we're using it for more and more over time. So what do we do with the metadata? Well, we feed it into a tool called Typewriter, which is kind of just a wrapper tool.
B: It's a command-line tool we built, and all of these tools that I'm talking about live in GitHub, open sourced under the Microsoft Graph organization, of course. Typewriter lives under a repo named something like msgraph-sdk-generator. We take that Typewriter tool and we use the Vipr library, which is another, older Microsoft library for doing code gen using T4 templates, and we take a set of T4 templates for the different languages that we support and we output a set of libraries.
B: Typewriter then emits a set of what we call service libraries. Currently we're emitting two different service libraries: a service library for the v1.0 API and, in some languages, also one for the beta API. So for each of the languages we support, we produce a service library, and the service libraries sit on top of two other handcrafted, artisanal libraries: the core library and the auth library.
B: They provide all of our standard cross-cutting concerns, and it's really those core libraries where we have some opportunity for reuse. Also, the auth library is something new that we introduced last year, and as of yet it's still sitting in preview. We've had a lot of community feedback from people asking: could we get this thing to GA already?
B: Yes, we want to get it to GA so that people can use it, because there are as many companies that will not use preview stuff as there are people building samples who don't want to use preview stuff. There have been some versioning issues, because the MSAL library that we use under the covers has a higher version requirement for .NET Standard, and we've been waiting for some of the other languages (we were waiting for MSAL for Java to be released). So there have been some delays in getting that auth library to GA.
B: But we have a slight change of plans, and we'll get to that in a moment, and we would love your feedback on it. So this is kind of how it works today for most of our SDKs, but we've actually started to take a slightly different path for our most recent SDK that we're previewing, which is the PowerShell library.
B: The problem with using AutoRest is that it doesn't take CSDL as an input; it takes OpenAPI as an input, and that's not how we describe Graph. So in order to make that happen, we first have to create an OpenAPI description of the Graph, and we've done that by creating a DevX API that provides a service for generating OpenAPI descriptions from the CSDL. That same DevX API actually powers a bunch of other experiences.
B: It powers our new Graph Explorer, with the new permissions feature, the samples, and the code generation that's in there, and the code generation is also used by API Doctor to generate the snippets that you now see in the reference documentation. So we generate that OpenAPI description and we feed it into AutoRest.
B: However, this will not sit on top of the .NET core library as before; it will sit on top of a Python core. Now, those of you who are familiar with our offerings at the moment might be saying: but wait, you don't have a Python core SDK at the moment. That's right: part of our roadmap for delivering the CLI is to produce Python core and Python auth libraries to power the Microsoft Graph CLI. One of the reasons they are using Python is that, if you've ever used the az CLI...
B: They have a notion of being able to query the JSON responses that come back with JMESPath, which is a Python library. And the CLI not only provides a command-line tool, but the generator has the ability to generate what I call magic modules, which can then be used to generate Terraform and Ansible modules. So if the parent plan goes through, we will have a wide range of support for the many different infrastructure tools that people use for provisioning infrastructure, and the story should be identical whether you're provisioning to Azure or provisioning to the Graph.
A: Are there any questions? I was waiting to get to a good point to say this: if you have questions (there are nearly 170 people on the call), please put them into the chat, and I'll find a good time, when Darrel takes a breath and some coffee, to ask him. So I'll keep an eye on it; there have been none in there so far. Cool.
B: So Azure themselves, their SDK team, are going through a bit of a transition too in the way that they're building SDKs. What I'm about to describe is their new model, and there are a number of their APIs that are supporting the new model. As a consumer, you probably won't notice; the transition from the old model to the new one is fairly seamless, but it helps them provide a better experience over time.
B: The way it works over in Azure land is that each of the hundred and ten service teams creates, by hand, a JSON document that is an OpenAPI description of their service, and they feed that OpenAPI description into AutoRest, which has a set of generators. (I was supposed to say AutoRest spits out C#, but PowerShell, PowerPoint... there are too many Power products; I chopped up some of my letters there.)
B: So they have a set of generators, because AutoRest is a pluggable model, to do a whole bunch of generation, and for each language and each service they output a deployed package, which is a set of generated code and the client for that particular service. It sits on top of Azure Core and Azure Identity, which are their equivalent of our Graph core and Graph auth libraries.
B: The big difference really is that in Azure they create a separate deployed package for every single service, and those packages are versioned independently, because the underlying APIs are versioned independently. Whereas in Graph, we version the entire Graph API together, so all services are versioned together, and as of today we really only have the beta API and the v1.0 API, so there are really only those two versions out there. The Azure services, some of them version once every six months, sometimes once a year, that type of thing.
B: The interesting thing is that Azure Core and Azure Identity, those libraries, are doing a lot of similar things to what we are doing in Graph core and Graph auth, and this is where we believe we can get some alignment that will bring some value. So one of the first things that we are considering (and when I say considering, I mean considering very, very strongly) is taking our auth libraries that we have not yet GA'd and saying: let's not GA them; let's just adopt Azure Identity.
B: You create a token credential class, provide it with your credentials, and then you pass it into the service client, and that token credential class takes care, under the covers, of actually going and getting the token when it's needed, using the middleware pipeline. So architecturally our two approaches are very, very similar; the difference is that we provide a different API for consumers of these two auth libraries to actually interact with these classes.
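A rough sketch of that shared-credential pattern follows. All of the class names here are illustrative stand-ins (not the real Azure Identity or Graph SDK types); the point is just that both clients accept the same credential shape, so one instance can serve both.

```typescript
// Minimal stand-in for the TokenCredential shape: something with getToken().
interface TokenCredential {
  getToken(scopes: string[]): Promise<{ token: string; expiresOn: number }>;
}

// Hypothetical credential, loosely modeled on a client-secret credential.
// A real one would call Azure AD; this one fabricates a token for the sketch.
class FakeClientSecretCredential implements TokenCredential {
  constructor(private tenantId: string, private clientId: string, private secret: string) {}
  async getToken(scopes: string[]) {
    return { token: `fake-token-for-${scopes.join("+")}`, expiresOn: Date.now() + 3600_000 };
  }
}

// Both clients take the same credential type in their constructor...
class GraphServiceClient {
  constructor(public readonly credential: TokenCredential) {}
}
class AzureBlobClient {
  constructor(public readonly credential: TokenCredential) {}
}

// ...so a single credential instance is shared across Graph and Azure calls.
const credential = new FakeClientSecretCredential("tenant-id", "client-id", "s3cret");
const graphClient = new GraphServiceClient(credential);
const blobClient = new AzureBlobClient(credential);
```

The design point Darrel is making is exactly this constructor signature: one credential object, many clients.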
B: The idea is that you would be able to create one token credential class and pass that same token credential instance into both the Graph service client and whatever Azure service library you're currently using. This hopefully gives you, as a consumer, a better experience, and it reduces the effort of our two teams implementing basically the same code. And this is becoming more and more the case now, as Azure is moving to the Azure AD v2 endpoint.
B: In the past, a lot of Azure still used the v1 endpoint and ADAL, and people were still using that. Now the Azure SDK team have made an effort to move everything over to using the v2 endpoint and MSAL where possible, which aligns with our approach on Graph. The token credential class is interesting because it's quite a simple model, in that you know what credentials you provide. Do you have a client secret? Do you want to do a device code flow?
B: That's where you can take a website and identify the website as having an identity, and grant a role to that identity. We currently don't support that mechanism in Graph today, and Azure Identity does, so we would win there. Azure Identity also has an interesting concept that they call default credentials, or the default token credential, which uses a kind of fallback mechanism where it goes and says: is there a managed identity that's going to give me a credential? No? Okay, are there environment variables that are going to give me credentials?
B: No? Okay, then I'll fall back: is there something in the token cache that can give me a credential? Nope? Okay, fine, then fall back all the way to, last but not least, actually doing an interactive experience. In the Azure world, I think a lot of people use that default credential experience, because you can write code that will lift and shift into different environments, and it makes it easy to support CI/CD scenarios. And of course, you know, many eyes make bugs shallow.
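That fallback chain (managed identity, then environment variables, then token cache, then interactive) can be sketched as plain first-match-wins logic. This is a simplified illustration of the idea, not the real DefaultAzureCredential implementation; the sources below are simulated.

```typescript
// A credential source either yields a token string or null if unavailable.
type CredentialSource = { name: string; tryGetToken: () => string | null };

// Walk the chain in order and return the first token found.
function resolveToken(chain: CredentialSource[]): { source: string; token: string } {
  for (const c of chain) {
    const token = c.tryGetToken();
    if (token !== null) return { source: c.name, token };
  }
  throw new Error("no credential source could provide a token");
}

// Simulated chain mirroring the order described above. Here the first two
// sources fail (we're not in Azure, and no env vars are set), so the token
// cache wins before an interactive prompt is ever needed.
const chain: CredentialSource[] = [
  { name: "managedIdentity", tryGetToken: () => null },
  { name: "environment", tryGetToken: () => null },
  { name: "tokenCache", tryGetToken: () => "cached-token" },
  { name: "interactive", tryGetToken: () => "interactive-token" },
];

const result = resolveToken(chain);
```

The lift-and-shift benefit falls out of this structure: the same code picks up a managed identity in Azure, env vars in CI/CD, and an interactive login on a dev box.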
A: Sorry, yes, there was a question in here from Kirk Munro: "I'm currently working with Graph via PowerShell only, and as long as I can still use certificate-based authentication to connect to the Graph and continue to manage Graph resources in unattended scripts that use a CBA connection, it doesn't matter to me whether we use Microsoft Graph auth or Azure Identity." So I guess the question is how this in fact affects PowerShell versus C# versus other things.
B: So I'm glad you said you don't really care, Kirk, because hopefully it would be transparent. It wouldn't change any of your scripts today; it would just bring additional features and capabilities, and hopefully improved quality. Those are the two things it would bring for the PowerShell developer.
B: Off the top of my head I don't know, but I'm going to assume that the Azure SDK token credentials support going and getting a secret from Key Vault, because it would be kind of silly for the Azure team to build SDKs that don't get it from Key Vault, and we would just inherit that automatically. Okay.
B: It helps the identity team too, because right now the identity team are split, supporting multiple different customers: they're trying to support the needs of Azure and they're trying to support the needs of Graph, and we're going to go to them and say, well, we're basically the same customer now, we just have some additional scenarios. Because at the moment the Azure SDK token credentials don't have great support for web-based scenarios like the auth code flow and the on-behalf-of flow, we will be contributing back additional token credentials.
B: It's going to be just magic. If you go to opentelemetry.io, you'll see it is becoming an industry standard. It was recently formed as a merger of two different efforts, OpenCensus and OpenTracing, which were two different standards in the world, because you can never have enough standards. They came together to make OpenTelemetry. It is, as I say, an industry-standard way of doing distributed tracing, and it is vendor neutral. Microsoft are an active participant.
B: Google are actively involved, and they are writing a set of libraries that allow clients to collect data about the calls they're actually going to make and pass that information along the wire, so that tracing systems, like App Insights, like Dynatrace, like Stackdriver, like all the other major APMs (application performance monitoring tools), can ingest it. What's really cool about this is, if you're on-prem with a big application that's using one vendor's distributed tracing stuff, and then you call into Azure services...
B: They're going to be able to bring all that data together and give you a single visualization of the trace calls that happen end to end, because everybody is using this industry standard. And I believe the goal is for this to be part of what they call Azure fundamentals: a base piece of infrastructure that every service should support. The Azure SDK team have already started adopting this into their SDKs; we were about to start doing it.
B: We've done some investigations and proofs of concept on it, but by aligning with the Azure SDK team we will be able to just pull their stuff in, and then you will be able to get visibility into how your apps are making calls and collect metrics on the calls that are happening. So that should just come for free; that's the theory. The next area, which will probably have less of an impact on consumers, other than the fact that we should do a better job of bringing you consistent, reliable, regular updates, is engineering processes.
B: We're just going to rely on their engineering processes. The Azure SDK team spent quite a bit of time actually writing down their guidelines as to how they manage their source control and version identifiers, and where they put packages and repositories, and we're just going to follow that same model. So if you're familiar with the way the Azure SDKs are managed, we should end up being very similar to the way they're doing it. Those are the high-level items, and there are some smaller items...
B: ...where we are doing little bits of alignment, but these are the major pieces. Now let's talk about areas where we aren't going to align, or maybe won't align. This is one big piece that we have built into the Graph SDKs where we take quite a different approach. Both the Graph SDKs and the Azure SDKs use a middleware pipeline for providing cross-cutting concerns.
B: However, we made a very explicit decision to use the native pipeline of whatever native HTTP library we chose to use. So, for example, in .NET, obviously, it's the system HttpClient; in Java, on Android, we use OkHttp; and so on and so forth. The reason we use the native pipeline was because we had customers who were very much in brownfield scenarios: they had an existing app, and they currently were not using the SDK.
B: They're currently using the native library throughout their entire application, and yet they wanted the benefits of our retry handling, our redirect handling, the compression stuff, those pieces of middleware that we've built, and the auth providers. What we were able to provide is a client factory that simply allowed them to change one line of code in their application, where they create that HTTP client. We would pre-build an HTTP client with the middleware pipeline in place, and then, magically, everywhere within their application where they're making calls...
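The client-factory idea can be sketched as a function that composes middleware around whatever native send function you already have, so only the one construction line changes. This is an illustrative sketch (real pipelines are asynchronous and richer); the handler and middleware types here are invented for the example.

```typescript
// A handler takes a URL and returns a response; middleware wraps a handler.
type HttpResponse = { status: number; body: string };
type Handler = (url: string) => HttpResponse;
type Middleware = (next: Handler) => Handler;

// Retry middleware: retry once on a throttled/unavailable response,
// standing in for the SDK's retry handler.
const retryMiddleware: Middleware = (next) => (url) => {
  let response = next(url);
  if (response.status === 429 || response.status === 503) {
    response = next(url); // a single retry is enough for the sketch
  }
  return response;
};

// The "client factory": compose the middleware chain around the native send.
function createClient(native: Handler, middleware: Middleware[]): Handler {
  return middleware.reduceRight<Handler>((next, mw) => mw(next), native);
}

// Fake native handler that is throttled on the first call, then succeeds.
let attempts = 0;
const nativeSend: Handler = () => {
  attempts += 1;
  return attempts === 1 ? { status: 429, body: "throttled" } : { status: 200, body: "ok" };
};

// The one changed line: build the client through the factory.
const client = createClient(nativeSend, [retryMiddleware]);
const response = client("https://graph.microsoft.com/v1.0/me");
```

Every existing call site keeps calling the same client shape; the pipeline adds retry (and, in the real SDKs, redirect handling, compression, auth) transparently.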
B: ...the middleware pipeline would transparently start adding the value of the SDK, and then they have the option, over time and incrementally, to adopt the strongly-typed capabilities of our request builders and model libraries that come with the rest of the SDK. Now, philosophically, the Azure SDKs took a different approach: they have built a middleware pipeline, but in order to keep that middleware pipeline consistent across languages, they designed their own custom pipeline that sits in front of the HTTP library.
B: We don't want to do that, because we think it breaks brownfield scenarios, and we are working with the Azure SDK team to try and see if there's a compromise. The Azure SDK team already have the capability of taking their custom pipeline and actually inserting it into a piece of pipeline that exists inside the native library. So there are workarounds, but this is a capability we're not going to lose, and maybe we can work together and find something that works for both the Graph and Azure SDKs here.
B: That's written by hand, separate from our generated code, so this is another area where we're going to stay different. And finally, the big topic really is versioning. Graph takes a very, very different approach to versioning than the way Azure does, and for a variety of different reasons that are valid in both cases. Azure is much more focused on deployment and provisioning scenarios, and it has its ARM deployment templates as a way of doing idempotent updates, and they have taken an approach...
B: ...that requires people to be very strict about changes, and therefore even adding an optional field in many cases requires a version change. Graph has a much more evolutionary approach to versioning, and this is why we release packages differently: we have one package for all services, but a separate package per version, whereas the Azure SDKs have one package for all versions but separate packages per service.
B: It's just based on the way the APIs are versioned, and I don't see that aligning anytime soon, just because of, well, history, and the scenarios each is aimed to meet. Which brings me to the final slide, looking a little further out, medium term to long term. Now, I mentioned that our code generator tool is using Vipr, which is a fairly old tool that uses T4 templates for generating and doesn't really have a whole lot of engineering support around it, so it's always a little scary...
B: ...when we have to go in and make changes. We would like to align on using AutoRest as our code generation tool. That is the company tool for generating code in SDKs, and it makes the most sense for us to use it. What AutoRest currently can't do is generate fluent APIs. The Azure team have done some experimentation around fluent APIs; specifically, they do a fluent API for Java, which I think is largely handcrafted.
B: They did do some experimentation with fluent in .NET, but the community were not big fans of it, and I think that's partly due to the way the URIs are created and designed in Azure. Every Azure service starts with subscription, then resource group, then resource provider, and only then actually gets into the meat. If you're having to do that navigation in every fluent API, it just kind of feels pointless.
B: The Graph API design is just quite different and is more amenable to the fluent style, and, as we'll see as we move into the next topic, we've got a lot of community feedback that people like the fluent API that exists in Graph. So that's one of the reasons there's no AutoRest generation of a fluent API today.
B: OpenAPI 3 can describe those navigation properties with a concept it has called links, and we need to work on our generation of our OpenAPI description to bring in links that describe the navigation properties, and then we need AutoRest to support the concept of links. This is where we're hoping to encourage the AutoRest team to go, and if we can get links supported in AutoRest, then we will be in a position to actually start generating.
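To make the "links" concept concrete, here is a hypothetical illustration of what a links entry describing a Graph navigation property (user to contacts) could look like, expressed as a TypeScript object following the shape of the OpenAPI 3 Link Object. The operation id and parameter names are invented; the real generated description may differ.

```typescript
// Hypothetical links block for a GET /users/{user-id} response, showing how
// the "contacts" navigation property could point at a follow-up operation.
const getUserResponseLinks = {
  contacts: {
    // the operation that the navigation property resolves to (invented id)
    operationId: "listUserContacts",
    parameters: {
      // runtime expression binding the id from this response into that call
      "user-id": "$response.body#/id",
    },
    description: "Navigate from a user to that user's contacts",
  },
};
```

A generator that understands entries like this has exactly what it needs to emit a fluent step such as `.users.getById(id).contacts`.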
A: There was a question around the Exchange Online management API, so not quite relevant to the Graph SDKs we have today, but one thing I do want to bring up is UserVoice. Often, if you're asking whether things are coming: our PMs are chased by our team across the different parts of Microsoft to ensure that they're keeping those requests up to date. So, for instance, with the Exchange Online management API there is a UserVoice ask that has, I think, around 70 votes.
A: If you haven't voted there, I really encourage you to go vote on these things, because it does influence our ability to encourage those teams to get those items added to their backlogs and planned for the next semester of work. In the case of UserVoice, what it will do is, if you're logged in to UserVoice and have voted, when the status of those requests actually changes, you'll be notified by email.
A: So that's the best way for us to communicate at scale as things get into review in a planning cycle, or are actually put onto the backlog, or are in progress, or arrive in the beta or v1.0 endpoint. So if there's some piece of interest to you, please go and check it out on UserVoice and vote it up, and you'll be subscribed.
A: Andy Lynch asked a question: what is being done to increase the stability of the underlying services which the SDKs access? He mentions a number of issues around service stability. Andy, let's park that till the end; if we don't get time, please reach out to me, as I'd like to understand what those service issues are. I think it'd probably be easiest to take it offline, because it'll be very specific to your particular scenario. In general, we do communicate service outages on the Graph through the Graph blog.
B: ...a token credential. And stay tuned: I will be producing some samples soon that show you how this is going to work. But it's good for us to get some feedback from the community, insofar as: "yeah, okay, as long as it doesn't do this and do that, I'm fine with it," or "no, absolutely, this is going to cause all these kinds of problems, and I have experience with the way Azure works, and it's not going to work this way for Graph."
B: You know, sometimes we're wrong. Never! Okay, so moving on. This is again about alignment: we've got different teams overlapping in effort, and that's not good, so let's figure out how we can solve this problem. We have the Microsoft Graph JavaScript SDK, and the JavaScript SDK has a core library, but it does not have a fluent API, and there's actually a UserVoice item where people are requesting a fluent API. So if you feel that we definitely should have a fluent API, please go and vote there, even though we are planning to do it anyway.
B: Having that extra vote helps us justify the work that we're doing to our management, because they'd like to know that we're doing the things that we're supposed to be doing. On the other hand, we also have the PnPjs project, which comes from the PnP community, and I know a lot of folks in the Graph community have come from the SharePoint world, and the SharePoint world is now turning into a bigger M365 world, and this library also does similar things. If you read the description here, it talks about calling Microsoft Graph...
B: ...REST APIs in a type-safe way. One of the things that always frustrated me before I joined Microsoft was when Microsoft produced two ways of doing things and I didn't know why you should use one versus the other, and it's just one of those decisions that people really shouldn't have to make. I think the value of coming from PnP is that there's a lot of other PnP infrastructure, and if you're familiar with that way of working, PnPjs has a lot of cool integration points with the SharePoint ecosystem.
B: And if that's where you're coming from, then maybe it makes a ton of sense to use this particular library. If you're not coming from that world, then the Graph SDKs produced by the Graph SDK team, my team, are the ones that are funded to actually build JavaScript across all of Graph. One of the big differences is in what the PnPjs team are doing: if you dig into it, there is code for calling the Graph, and there is a fluent API, but it's handwritten fluent components that only cover a small surface area of the Graph, because trying to cover the entire surface area of the Graph would be extremely difficult to do, and it's something that would take a lot of community effort. So, what we are trying to do...
B: So let's just take a quick look at a few examples. Looking at this first example, which is PnPjs going and getting a list of contacts: you start with a root client object and say .users, then use getById to go and get a particular user, and then say .contacts, and it returns back a list of contacts. This fluent API effectively allows you to build up the request that you're making.
B: Now, if I go back and look at our docs page for the Graph SDKs and flip over to the JavaScript example here, you'll see we don't have a fluent API; you actually have to build that URI manually yourself. So you need to know (in this case it's actually hitting /me instead of /users, but use your imagination) that you build a string that has /users, then the email address, then /contacts.
B: That's how you'd call it in JavaScript, whereas in C# we actually have a fluent API that allows you to do graphClient.Me.Contacts.Request() and then GetAsync(). It's fairly similar to the way it works in PnPjs, and if you imagine a little bit of JavaScript syntax on there instead of C#, you can squint and kind of see how it might work.
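The fluent style being discussed can be sketched as a tiny request-builder chain that assembles the same URL you would otherwise concatenate by hand. All the classes here are illustrative (not the shipped Graph SDK or PnPjs API); a generated builder would return request objects rather than raw strings.

```typescript
// Leaf builder for a contacts collection; request() would normally return a
// request object, but returning the built path keeps the sketch inspectable.
class ContactsRequestBuilder {
  constructor(private path: string) {}
  request(): string {
    return this.path;
  }
}

// Builder for a single user, exposing the "contacts" navigation property.
class UserRequestBuilder {
  constructor(private path: string) {}
  get contacts(): ContactsRequestBuilder {
    return new ContactsRequestBuilder(`${this.path}/contacts`);
  }
}

// Builder for the users collection, with PnPjs-style getById().
class UsersCollectionBuilder {
  getById(id: string): UserRequestBuilder {
    return new UserRequestBuilder(`/users/${id}`);
  }
}

// Root client: each property step appends one URL segment.
class GraphClient {
  get users(): UsersCollectionBuilder {
    return new UsersCollectionBuilder();
  }
  get me(): UserRequestBuilder {
    return new UserRequestBuilder("/me");
  }
}

const client = new GraphClient();
// client.users.getById(...).contacts mirrors the PnPjs chain;
// client.me.contacts mirrors the C# graphClient.Me.Contacts shape.
const byIdUrl = client.users.getById("kirk@contoso.com").contacts.request();
const meUrl = client.me.contacts.request();
```

Because each step is just a property that appends a segment, this shape is exactly what could be generated mechanically from navigation-property metadata.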
B: Let's take a look at another example (let me close this; I had way too many windows open beforehand): creating something. So here is creating an invitation for a guest, and you simply do graph.invitations.create and pass in some parameters to that particular create method. If we go over to our docs page for creating an invitation and scroll down to the actual request, you can see in C# you create an Invitation object and then do graphClient.Invitations.Request().AddAsync(). Not terribly different.
B: If you look at the JavaScript, it's also not that different; the only difference is that you currently have to construct the URI yourself. So if we take that fluent-ness over from C# and bring it into JavaScript, there's no reason why we couldn't auto-generate this kind of syntax, to save the community from having to write it by hand for what we were guessing, at last count, is around three thousand endpoints on the beta Graph API. Now, looking at getting an individual object (close that one, and close that one)...
B
So here we do graph.teams.getById(myId), going and retrieving an individual ID, which turns back a team. If I go and look (and I'm gonna just cheat a little bit here and look at getting a group, because for some reason the sample for getting a team isn't rendered; I need to talk to the Teams team to find out why that is), here we have graphClient.Groups. In C# we were able to overload the index operator in order to go and retrieve a particular group, and that might not be possible in JavaScript.
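A sketch of the getById pattern, which is PnPjs's stand-in for C#'s overloaded indexer. The function and shapes are invented for illustration.

```javascript
// Hypothetical collection builder: the collection itself has a path,
// and getById(id) narrows it to a single resource, playing the role
// that Groups["{id}"] plays via the overloaded indexer in C#.
function teams() {
  const base = "/teams";
  return {
    path: base,
    getById(id) {
      return { path: `${base}/${id}` };
    },
  };
}

console.log(teams().path);                 // -> /teams
console.log(teams().getById("1234").path); // -> /teams/1234
```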
B
That's
quite
reasonable
for
us
to
follow
that
same
pattern
that
is
used
in
PNP
GS
to
use
get
by
ID
would
be
logical.
So,
let's
just
look
at
a
more
sophisticated
example.
Now
in
PN
PJs
they
have
implemented
a
mechanism
of
batching
and
you'll
see
here
they
start
off
by
saying
or
create
a
batch
and
then
using
a
fluent
interface.
They
create
a
bunch
of
other
requests,
go
get
a
list
of
web
sites
and
then
go
get
the
top
two
and
then
go
and
select
the
title.
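The batching idea described above can be sketched as follows: requests added to a batch are collected and sent as one payload instead of N round trips. All names and shapes here are illustrative assumptions, not the PnPjs or Graph SDK surface.

```javascript
// Sketch: a batch collects request descriptions; in a real client,
// execute() would POST this whole payload to the $batch endpoint.
function createBatch() {
  const requests = [];
  return {
    add(method, url) {
      // Each batched request needs an id so responses can be correlated.
      requests.push({ id: String(requests.length + 1), method, url });
    },
    payload() {
      return { requests };
    },
  };
}

const batch = createBatch();
batch.add("GET", "/sites?$top=2&$select=title"); // get top two sites, title only
batch.add("GET", "/me/messages");
console.log(batch.payload().requests.length); // -> 2
```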
B
So the whole idea here is that we believe (and you can go see these examples in the wiki) there's really not a long leap between what the PnPjs fluent API is doing and what we are able to generate. What the difference is: there's a core library underneath PnPjs that has a bunch of functionality. It does things a little bit differently than the SDK, and we don't want to break customers that are already using PnPjs.
B
And
if
people
are
used
already
using
the
JavaScript
SDK,
then
they
will
be
able
to
use
that
same
fluent
API
and
over
time.
We
can
bring
those
two
communities
together
and
the
code
that
each
like
each
team
are
using
will
be.
The
same,
will
have
the
same
experiences
we'll
be
able
to
consistently
report
bugs
with
regards
to
it,
and
we
can
not
waste
effort
duplicating
more
code.
That
is
unnecessary.
B
We are in, let's call it, an early design phase. Yes, we are going to get a strong feel for how this should come together before we start generating, and the thing is, once we've figured out what it looks like across the various set of scenarios, there's not a lot of code that's necessary in order to do the code gen. It's about getting it right and making sure we've got solutions for all of the edge cases, and we also want to do some cleanup on the C# code.
B
It takes a lot of investment to go look at a project that is in progress and actually provide feedback and understand, well, what is the team thinking. We are really trying, with our design repo, to wear our hearts on our sleeves and put our requirements out in the open, so you can see what we're thinking, and we'll get better at that over time. But if you have opinions, feel free to share them.
A
So
the
the
message
window
is
very
quiet,
so
I've
interested
other
people
already
using
the
PM
PJs
to
work
against
the
graph
API
we
lit
up
telemetry,
while
going.
We
saw
that
there
was
some
usage
of
it
around
the
graph,
but
most
of
the
PPG
seems
to
be
against
the
ship.
Went
REST
API,
isn't
not
look
at
the
graph.
Okay,
so
I
can
see
one
yes
here,
so
it
really
appreciate
it.
B
Samples and stuff we'll share, yeah. And if you are familiar with the PnPjs fluent API, if there are areas where you're thinking "there's no way they're gonna be able to generate this," please let me know. Tell me where you think the really hard parts are going to be: the bits that maybe don't fit into a standard pattern, and maybe don't have the metadata description in CSDL to make it happen. Those are the bits that I want to know about, which are gonna be the painful parts. Awesome.
A
Okay
and
then
so,
it's
just
a
quick
bit
of
wrap
up
on
this
core.
Thankfully,
that
everything
was
super
useful
when
the
slides
and
this
recording
will
be
available,
I
off
the
tire
after
the
session,
probably
by
usually
like
Wednesday
afternoon
or
Thursday.
We
also
have
a
podcast
that
Paul,
who
was
asking
some
of
the
questions
in
the
call
I
do
and
we
had
Kevin
on
profile
api's.
We
had
vesser
from
the
sharepoint
engineering
team
won
about
SPF
x.
One
point
ten.
A
If
you
haven't
listened
to
those,
I
highly
encourage
you
know
to
check
those
out
there
are
a
series
of
other
community
calls
as
well.
So
if
you
kind
of
discovered
this
one
right
being
plugged
into
the
graph,
please
be
aware
that
you
know
there's
a
variety
different
sharepoint
calls
that
go
on
around
SPF,
x
and
general
dev,
and
then
a
sharepoint
call
as
well
as
things
like
teams.
Do
one
and
identity
from
an
old
does
a
specific
one.
Do
as
well
that
encourage
you
to
go.
A
Make
sure
that
you're,
aware
of
all
these
calls
and
subscribe
to
the
ones
that
are
relevant
to
you.
If
you
have
something
you'd
like
to
present
or
share
with
the
community
around
the
graph,
please
reach
out
to
me:
Jane,
take
and
Microsoft
calm,
we'd
love
to
see
more
people
from
the
community,
contributing
and
sharing
on
these
community
cause.
A
I
know
that
the
sharepoint
community
calls
have
a
law
community
engagement
and
I'd
really
like
to
try
and
mirror
that
here
on
these
calls,
and
so
please
reach
out
if
there's
anything
you'd
like
to
present,
there
is
one
person
I'm
talking
to
at
the
moment
to
try
and
schedule
up,
and
it
would
be
really
really
great
to
have
a
few
more
people
that
are
willing
to
share
what
they're
doing
with
the
graph
and
so
forth.
And
then
there
are
a
few
questions
crawling
in.
A
So
let
me
just
quickly
reduce
I
heard
that
the
PMP
support
the
graph
will
end
because
Ms
wanted
that.
If
so,
when
is
that
going
to
happen?
So
that
wasn't
the
case,
there
was
definitely
a
communication
issue
between
the
graph
team,
Darryl
and
I,
and
the
PM
pjs
leaders
like
Patrick,
Rodgers
and
VESA.
We
didn't
want
it
to
end.
A
We
just
know
that
strategically
the
direction
of
generating
the
Microsoft
graph
API
is
on
the
sdk
is
something
that
can't
really
be
done
as
a
hand-cranked
thing,
because
of
the
fact
that
pretty
much
every
month
is
a
whole
brand-new
work
quite
appearing,
like
I,
think
we're
up
to
about
50
different
workloads
now
on
the
graph,
and
so
we
as
Microsoft
need
to
provide
a
JavaScript,
SDK,
Don,
SDK,
etc.
That
has
full
coverage
of
the
graphs
at
the
answer
be
consumed.
We
don't
want
customers
being
frustrated
that
only
certain
elements
of
the
graph
are
supported.
A
We
absolutely
heard
loud
and
clear.
The
fluent
was
one
of
the
reasons
that
they
were
using
p.m.
PJs,
and
so
we
wanted
to
have
this
call.
We've
been
having
discussions
alone
for
a
long
time
now
to
ensure
that
the
graph
SDK
for
JavaScript
has
the
same
fluent
capabilities
in
hope
that
you
know
we'll
kind
of
have
the
best
of
both
worlds.
There
there
was
definitely
a
miscommunication,
and
we
had
no
time
said.
Don't
do
this.
A
It
was
more
of
a
case
of
this
makes
more
sense
to
do
it
at
scale
for
a
dedicated
team,
and
you
know
we're
always
going
to
have
a
graph
SDK,
and
so
we
want
to
make
sure
we
provide
the
best
experience
for
app
developers,
so
apologies
for
any
mixed
messages
there.
On
that
Robert
thorn
said,
there
was
some
mention
of
a
couple
calls
ago
about
the
ability
to
grant
api
permissions
more
granular
basis
on
a
group
basis
and
was
to
be
discussed
the
future
core.
Is
there
any
news
on
this?
A
So
the
big
thing
about
the
graph
API
is
more
grainy
permissions.
We
had
a
call
on
resource
specific
consents,
specifically
around
the
team's
API
is.
We
are
now
ecstatic.
Well,
we
talked
about
in
a
build
in
May
of
last
year
we
announced
it
at
ignite
four
teams.
There
is
still
work
going
on
there
around
the
resource,
specific
consent.
There
has
been
some
lag
there
and
just
to
be
very,
very
clear
on
the
expectations
of
what
resource
pacific
consent
means
right
now
the
only
api
is
on
the
graph
that
will
support.
It
are
the
team's
api's.
A
So
the
scenario
that
we're
unlocking
with
the
result,
specific
consent
is
that
a
team's
admin
or
even
a
team's
owner
can
deploy
a
team's
app
and,
at
that
point
in
time,
can
consent
permissions
to
teams.
Api.
Is
that
the
benefit
of
this
from
a
team
scenarios?
Is
that
right
now
to
use
teams
API,
as
you
would
be
required
to
do,
I'd
been
consents
and
we've
resolved
specific
consent.
It
is
not
doing
it
for
all
teams,
it's
doing
it
within
the
scope
of
the
team.
A
The
resource
that
you're
consenting
it
against,
but
just
a
clarification
is
right.
Now
it
would
not
mean
if
you
you
could
consent
planner
API
into
that
particular
resource.
Specific
consent.
You'd
still
have
to
use
admin,
consent
and
group
rewrite
all
the
planner,
but
the
envisionment
over
time
is
that
resource
specific
consent
would
be
something
that
would
incur
all
of
the
graph
and
give
you
that
flow
across
teams
and
in
the
future
we
would
do
the
same
kind
of
scenario
for
deploying
an
SPF
X
web
part
into
SharePoint.
A
That
has
some
kind
of
resource,
specific
consent
that
didn't
require
kind
of
having
access
to
all
of
sharepoint.
You
can
just
have
it
for
wherever
the
sharepoint
SP.
The
next
thing
was
to
point
to
the
resource
it
was
deployed
to
so
as
we
get
further
down
and
we'll
certainly
kind
of
drive
that,
but
I'm
is
unfortunately
still
early
days.
A
Isn't
all
that
kind
of
all
the
boxes
checked,
and
then
teams
is
the
priority
kind
of
scenario
to
test
and
make
sure
we
get
that
right
when
it
comes
to
resource
with
city
consent,
and
it's
not
just
the
team's
problem.
No,
this
is
across
sharepoint
is
across
exchange,
and
so
we're
trying
to
work
out
the
right
way
to
do
this
so
that
you
don't
need
so
kind
of
have
the
kitchen
sink
to
do
something
within
a
particular
resource.