From YouTube: Maturity Model for Microsoft 365 - June 2023
Description
Recorded:
• June 20, 2023
Topic:
Maturity in the Age of Copilot
Agenda:
• Practitioner & Maturity Model Overview
• Artifacts and Updates
• Purpose
• How to get more involved
• Shout-out & Picture Time (Together Mode!)
Hosts:
• Marc Anderson, Sympraxis Consulting | @sympmarc
• Emily Mancini, Sympraxis Consulting | @eemancini
• Simon Doy, iThink 365 | @simondoy
• Simon Hudson, Novia Works | @simonjhudson
• Sharon Weaver, Smarter Consulting | @sharoneweaver
Resources:
• Maturity Model for Microsoft 365 - https://learn.microsoft.com/en-us/mic...
• Interested in joining these calls? Download the invite at https://aka.ms/mm4m365/invite
A: Of course, we're going to give you an update on the artifacts and what we've been up to. We'll talk about the purpose of this session and the purpose of the maturity model. We'll then go into Together Mode, hopefully, and do a shout-out and picture time, so we can all see your lovely faces and get you involved in our session. Then we'll switch to how you can contribute to and support the maturity model, and then go to the main event.
A: If we go to the next slide: you can join us every month.
A: Okay, so join us every month. At some point there will be a link; let's see: aka.ms/mm4m365/invite. You can download a recurring calendar series and not forget about our sessions. That'd be great, and thank you to those who have come today.
A: All right, I'll keep talking; we'll get there. So, in terms of the approach in the article: the purpose of the Maturity Model for Microsoft 365 is to help organizations, and to help us and you, talk to key decision makers about what the purpose of Microsoft 365 is.
A: We've got common competencies which all businesses need to be able to do, and depending on the competency there'll be some areas your business will need to focus on more than others. But the aim of it is to help you come up with the benefits, the reasons why we should improve the likes of collaboration, community, and cognitive business, so that we're able to bring all of us up to a better level and work on interesting projects.
A: We can help our businesses be more successful, help drive them to be better, and improve our maturity in these areas. And of course it's a community effort, so we'd love you to get involved and help us: inform us of things which are working well and not working well with the maturity model, where we can improve, and things we've missed out. We'd love you to help us and get involved on that side.
A: So recently we've added the cognitive business competency, and now we've been talking about Copilot, because we realize that some of these competencies (we've been doing this for a few years now) need to be updated with the changes, and Copilot is going to be a big change and a big shift for us all.
A: So this seemed like a good place to start that conversation. I'm moving to the slide with the maturity levels. The maturity model has five different levels, starting at level 100 and going up to level 500, which is kind of the nirvana.
A: Level 100 is that initial default state: generally the state if you've done nothing around that competency. Depending on the competency, particularly with Microsoft 365, there'll be certain things Microsoft are already doing for us that may help get us past level 100, but the idea of level 100 is the initial state. It's ad hoc, it's reactive, it's uncontrolled, it's chaotic; a little bit like this presentation this time. Then we move to level 200, where it's becoming more managed.
A: We're getting some routine. We're still firefighting, but we're starting to understand the challenges, and we can use that knowledge and understanding to get to level 300, where it's defined: we've got things documented, they're planned, they're controlled. We're not having to react so much, because it's been more thought about. Level 400...
A
Is
this
predictable
level
where
we're
starting
to
measure
we're
starting
to
see
and
test
things
out
test
and
measure
we're
using
to
make
things
better
and
then
level
500
which
that
sort
of
Nirvana
we're
best
of
breed
we're
optimizing?
A: The competency is optimal. It's being improved, it's being automated, it's really helping drive that business forward.
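The five levels just described can be captured as a small lookup table; this is only a paraphrase of the talk, sketched in Python for quick reference:

```python
# Maturity Model for Microsoft 365 levels, paraphrased from the discussion above.
MATURITY_LEVELS = {
    100: "Initial: default state -- ad hoc, reactive, uncontrolled, chaotic",
    200: "Managed: some routine; still firefighting, but challenges understood",
    300: "Defined: documented, planned, controlled; far less reactive",
    400: "Predictable: measured and tested; data used to improve",
    500: "Optimizing: best of breed; improved, automated, driving the business",
}

def describe(level: int) -> str:
    """Return the one-line description for a maturity level (100-500)."""
    return MATURITY_LEVELS[level]
```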
A: Okay, we're now going to switch over to PnP picture time. We're going to aim to turn on Together Mode, and for those who'd like to be involved (we'd love you to be involved), if you could switch your cameras on.
A: So, thank you. Is the sharing back on again?
A: Okay, cool, all right, perfect. So, moving to the discussion: we'd love you to get involved and contribute, and you can get involved by reacting. When you want to speak, please come off mute and share your video so we can see you. If you would rather just chat and suggest things in the background, that's absolutely fantastic.
A: So, as we've said, this is a community effort and we'd love to get you involved, and there are different ways of getting involved. If you had a case study about how you've used the maturity model, or particular aspects or competencies, we'd love you to come and present that case study for us. You can contribute on GitHub: there are various discussions always going on, and you can submit pull requests and create issues around the content that's out there.
A: So this is your maturity model, and you can help us even just by socializing the content: anything that you found useful, anything which you think we should be talking about, or maybe there's an issue with some of the articles, or we've missed something out.
A: It'd be great to hear your thoughts. There are a couple of links at the bottom: there's a link to running a Maturity Model for Microsoft 365 workshop, and then all the recordings from the sessions (I think we're up to date now) are available on that link and on YouTube as well. So, thank you for being here.
B: Okay, I've been at three different conferences in the last month, and it's been very gratifying to hear people talking about the maturity model in many, many different contexts. So that's that socialized-content idea. The goal here is to give people a common way to talk about things, and to frame things up about how you can get better at using the platform and increase your capabilities.
B: So, all of these conversations: Agnes Molnar actually did a whole session on search at the Collab Summit in Düsseldorf, and it was all about the maturity model for search. It's really cool when we see this out there in the wild. Super cool.
A: Yeah, definitely. So, moving on to upcoming topics. Today we're talking about Maturity in the Age of Copilot. The guest speakers? Well, there isn't a guest speaker; it's all of us, with our points of view.
A: We're going to be taking a break for a couple of months, so we hope you have great summer breaks, take some time to decompress and wind down, and enjoy the summer. Hopefully it'll stay sunny in the UK; it's currently pouring down with rain at the moment. But we'll be back on September 19th talking about accessibility, and we will come up with a guest speaker for that as well.
A: So, moving on to the next session: as I said, it's going to be September the 19th. This community session is part of the Microsoft PnP community, and there are a number of hands-on training sessions. So if it's something that you're not really sure about, writing community docs, or you want to improve the way that you write or contribute on GitHub, there are some training sessions available.
A: There's a link there: aka.ms/sharing-is-caring. Please click on that to sign yourself up to the training sessions; they're really good.
A: They gave me a lot of confidence, and I would like to thank all the attendees from the last session; thanks to everyone for coming last month. Okay, right, so on to the main event: Maturity in the Age of Copilot. So again, please get involved, react, let us know what you're feeling, either via chat or, if you'd like to talk, come off mute and talk to us. Switch on the camera, come off mute, raise your hand; that would be fantastic. And I'm going to hand over to Simon H, if that's okay, because you're heavily involved in the cognitive business session, so I think it's only right that you introduce this topic.
D: So I was thinking about the impact that Copilot has had within the GitHub space, and Simon Doy is more qualified to talk about this from a development point of view. I'm interested in the data. Microsoft do some pretty good research when they put their mind to it, so I was happy with the p-value, which indicates a very statistically significant result.
D
I
thinking
when
I
looked
at
the
impact
that
co-part
within
some
developer
tasks,
I
want
to
stress
that
again,
these
AIS,
these
co-pilots,
aren't
going
to
make
all
activities
that
we
do
better,
but
they
have
the
ability
to
make
some
activities
a
lot
better
and
obviously
Microsoft
have
cherry
peaked
the
the
more
headliny
ones,
but
this
figure
of
taking
a
pass
from
two
hours,
41
minutes
on
average,
until
less
than
half
of
that
I
think
is
fascinating.
That's
a
220
percent
productivity
gain
and
I
I.
Guess
that
got
me
thinking.
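As a quick back-of-the-envelope check on the time figure quoted above (how the percentage gets labeled depends on how "productivity gain" is defined; this simply converts the task times into a throughput ratio, taking "less than half" as exactly half for the bound):

```python
# Task time falling from 2h41m on average to (at most) half of that.
baseline_minutes = 2 * 60 + 41               # 161 minutes
with_copilot_minutes = baseline_minutes / 2  # "less than half" -> half as the bound

# Throughput scales inversely with task time: half the time means
# twice the output in the same working day.
speedup = baseline_minutes / with_copilot_minutes  # 2.0x at exactly half time
gain_percent = (speedup - 1) * 100                 # 100% more output at that bound
```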
D: What would it look like for you and your organizations if, for quite a few tasks in your working day, your staff could double their output? Double their output: that's literally getting twice as much done on some things. I think that's huge. Most companies are pretty happy if they can make a five percent improvement.
D: So a doubling is massive. One of the things that we were thinking about, as Microsoft bit by bit revealed more about Copilot, is: what does this mean for people who aren't developers but are still doing knowledge-based tasks? And where can we map that into the maturity model, the competencies that we've already done, to see where we might start saying: already, Copilot can do this, this, and this?
D: So what we're going to do: we're not going to work on the cognitive business competency itself (at some point we will update that), but right now we want to think about the capabilities which have been announced in Copilot, and what they could do. Marc has come through with it all in terms of a bunch of competencies. Now, Marc, Sharon, Simon, and I have picked a competency each and given some thought to what that might mean at each level.
D: But what we want to do to start with is actually get you guys to put in some effort for us, so we've got a couple of Microsoft Forms to do some quick surveys, and you'll be able to keep running those throughout this session. Here it goes; the first one, I think, is one that Simon put together. Yes, absolutely. So this one is about: which of the competencies, just from what you know right now, do you think might be most impacted?
D: So if you want to click the link (is that clickable? It is clickable), you can click the link and open the form, or, if you're not able to click the link for some arcane reason, you can do the whole QR-code photograph thing. We're going to give you a minute for that.
D: Yeah, that's the one. We'll give you a minute to have a go at putting some thoughts into that, and then what we're going to do a little bit later on is share the output from this.
D: Just give me a moment; I'm now in full-blown multitasking mode, because that link didn't come through properly as a link. I'm just going to try that again... all right, still not working. Bear with me; I'm just going to type it in manually so that people don't lose it.
D: Right, okay, so the second thing we'd like to do. The competency model is quite big now; there are many, many competencies, each subdivided into various levels, and it's a big job. So we've had a go at identifying some things that Copilot might change, add, or, you know, make irrelevant among the existing characteristics in each model. But we'd like you to give us your thoughts. So, in theory, this link (which does work) will take you through to a second form.
D
You
can
fill
it
in
as
many
times
as
you
like
in
each
case,
pick
a
competency
pick
a
level
if
relevant
and
and
then
tell
us
anything
that
comes
to
mind
about
a
change
in
the
a
particular
competency
that
co-pilot
might
need
reflecting,
and
we
are
going
to
give
you
a
minute
on
that
and
again
keep
doing
that
all
the
way
through
this
session,
as
new
thoughts
come
up
because
the
more
ideas
we
get
from
everybody,
the
better
we
can
make
this
this
upgrade-
that
we'll
try
and
do
over
the
summer
period
and.
A: Cool. So we had a think, didn't we, and we've each got kind of an area, the competencies which we particularly like, and these are the ones we picked. We tried to go for those competencies which we think people might start their journey on, too.
A: So we've picked out cognitive business, of course. I'm a developer really, so customization and development was my bag. Simon, you know, he doesn't like to communicate, he likes to talk, so he's picked out communication. Marc is all about information architecture.
A: Apparently you might have mentioned it one or two times; so, the management of content. And then Sharon has picked search. So we'll go through each of these in turn.
B: I think it's fair to say that we're all extrapolating a bit, because we've seen demos, we've seen the marketing material about Copilot; we'll see how it actually plays out. The good thing for you, Simon D, is that GitHub Copilot actually is out there and available, so we know what that looks like.
A: Yeah, that's right. Well, Copilot X is out as well; I've not got access to that as yet, but yeah, it's pretty cool.
D: Just tell us, for those that haven't sat in on those sessions.
A: So, Copilot (GitHub Copilot) uses comments to start getting it to auto-complete your code and generate code. Copilot X adds some additional functionality, and one of the things it does is bring in chat. So you can chat with the copilot to ask it questions and say, for example, "generate me some unit tests around this section of code," or "look at this function: how could I make it better?"
A
So
it's
more
of
a
you
know
your
almost
pair
programming
with
your
with
with
your
co-pilot.
So
it's
it's
giving
you
a
lot
more
flexibility.
It
makes
it
easier
to
to
kind
of
to
try
things
out
and
that
kind
of
sits
similar
to
the
other
co-pilots
which
we've
seen
with
you
know,
SharePoint
copilot
and
word
co-pilot,
where
you've
you've
got
that
chat
interface
around
the
context
of
what
it
is
that
you're
you're
working
on
so
yeah.
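To make the "generate me some unit tests" idea concrete, here is a hedged Python sketch: a small function of the kind you might highlight, plus tests in the style a chat-based assistant typically drafts. The function and the tests are illustrative assumptions, not actual Copilot output:

```python
# A small function you might ask a chat-based assistant to cover with tests.
def slugify(title: str) -> str:
    """Lower-case a title and join its words with hyphens."""
    words = [w for w in title.strip().lower().split() if w]
    return "-".join(words)

# Tests in the style an assistant typically drafts. As the discussion above
# stresses, review generated tests yourself before trusting them.
def test_basic_title():
    assert slugify("Maturity Model for Microsoft 365") == "maturity-model-for-microsoft-365"

def test_extra_whitespace():
    assert slugify("  Hello   World  ") == "hello-world"

def test_empty_string():
    assert slugify("") == ""
```

The value of the chat interface is the iteration: you can follow up with "also cover punctuation" and refine, which is the pair-programming loop described above.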
D: I wasn't really going to say too much about this; it just sets the scene and gives you some thinking about how we're going to approach updating the cognitive business competency. The one thing I did want to do was suggest to people that they should already be thinking about what their company AI policy is going to be.
D: This is one that I acquired from another company that I'm doing some work with, via Leslie, actually, so I think it's a reasonable starting point. But I suggest to people that, as we start to enter the world of AI-driven cognitive business (in fact, we've already got one foot firmly in it), we need to make sure that we've reflected that in our policies, ideally picking up some of the responsible AI concepts which are embedded in this document. Obviously the slide deck will be available later.
A: Yeah, that's right. Okay, so: I don't have such a big screen, so I apologize that some of the text is quite small on this one. But talking about Copilot and customization and development: this does seem to be the one which we've got the most information about.
A
In
terms
of
you
know,
overall
outcomes.
The
idea
is
that
people
are
spending
less
time
on
on
the
practice
of
coding
and
being
able
to
spend
more
time
on
thinking
about
what
they're
building
and
you
know,
learning
to
kind
of
work
with
the
co-pilot,
the
prompt
to
refine
the
result
that
they're
getting
to
to
improve
the
the
code
that
they're
building.
A
If
we
sort
of
break
this
down
into
the
different
levels,
you
know
level
100.
You
know
we
we're
sort
of
saying
that
no
one's
really
using
co-pilot
for
development.
It's
not
really
been
thought
about.
It's
interesting,
my
organization,
you
know
I've
sort
of
prompted
the
developers.
A
Well,
you
don't
program
a
machine.
You
know
machine
code
anymore,
on
an
assembly.
You
know
you
were
using.
You
know,
tools
which
abstract
up
so
that
we
can
be
more
productive
and
co-pilot
is,
is
going
to
help
us
do
that.
You
know
I
think
what
we
do
need
to
do
is
unders.
You
know
understanding
the
limitations
of
copilot
and
at
level
200.
We
probably
don't
understand
what
those
limitations
are.
A
You
know
it
would
be
dangerous
for
us
to
go.
Oh
copilot's
written
that
brilliant
I'll.
Just
copy
and
paste
that
into
my
code
base
because
that'll
be
fine
it
that
that
process
of
reviewing
the
code-
and
you
know,
sort
of
validating
that
what
it's
created
makes
sense.
You
know
it
fits
in
our
understanding,
that's
something
which
people
at
level
200
just
you'll
will
start
learning,
so
we'll
have
their
limitations,
of
course,
and
it's
understanding,
there's
limitations,
but
then
we
come
into
level
300.
You
know
we
start
having
thoughts.
A
We've
got
some
standards
being
defined
about
how
co-pilots
used
you
know.
Developers
can
start
using
copilot
Vijay
functions.
Instead
of
you
know,
manage
common
code.
We
might
have
functions
being
developed
using
copilot
and
and
being
shared
internally
as
well.
This
is
really
important
that
we're
reviewing
the
code
that's
being
created
by
copilot
as
seen
as
being
essential,
which
I
think
is
great,
because
you
know
it
instills
code
reviews.
A
We
should
be
doing
them
anyway,
but
but
if
you
know
if
by
reviewing
the
code,
we
get
better
at
code
reviews
of
ruining
the
code
that
copilot's
creating
we
get
better
at
code
reviews,
then
then
that's
great
and
and
that
idea
of
pair
programming
is
such
a
that's.
Such
a
productivity
win
I,
you
know
in
when
we
we've
used
pair
programming
at
our
organization,
so
being
able
to
do
pair
program
with
co-pilot
to
kind
of
start
off
with
something
get
it
get
a
function.
A
That's
that's
achieving
something
be
able
to
refactor
that
and
and
keep
iterating
until
we
get
it
to
the
point
where
we're
happy
and
it
delivers
what
we
want
through
that
that
mechanism
that
that's
great
and
finally,
in
the
level
300
we'll
understand
really
about
what
limitations
of
copilot,
what
it
is
good
at,
what
it's
not
good
at
and
how
we
can
work
with
it
to
to
get
the
most
to
or
to
get
value
out
of,
I
won't
say
the
most
value
out
of
it,
but
to
get
value
out
of
it,
who
moved
from
Level
300
to
level
400
we're
starting
to
now
think
about.
A: ...this idea of how you start working with AI: you start trying things out, then you start trying out prompts and phrases to get what you want, then you refine those to get the result, and eventually you might end up with a fine-tuned model which gives you the results. The idea is that you're trying these things out, and if we understand what we can get from Copilot, then we can incorporate that to improve our estimates and speed up delivery.
A
Also
in
level
400
we're
going
to
be
measuring
the
usage,
and
you
know
ideally,
you
know
we
should
be
recording
where
it's
being
used
as
well.
So
we
can
see
what
impact
copilot
is
having
on
those
projects
where
copolo's
being
used
and
level
500
now
we're
starting
to
see
in
it
see
this
is
you
know
like
this
is
something
we
just
kind
of
sort
of
thinking
through
is
you
know
we
might
start
having
a
co-pilot
which
is
very
specific
to
our
business,
and
it's
been
finely
tuned
so
that
it's
got.
A
You
know,
company
specific
domain
knowledge
understands
our
code
repositories,
so
it
can
produce
more
specific
code
bases
and
functions
for
for
our
organization,
and
we
are
starting
to
use
a
level
500
using
code
in
our
internal
repos
to
to
build
off
to
allow
co-pilot
to
build
those
those
codes
and
those
modules
which
can
then
start
to
be
optimized,
so
we're
starting
looking
at
what
we're
building
and
optimizing
our
internal
repos
with
copilot
and
then
the
last
part
is
that
we've
got
some
feedback
loops
and
we're
using
those
to
test
and
measure
and
improve
those
AI
models
and,
what's
being
and
being
generated,
the
level
500
yeah
sort
of
kind
of
trying
to
push
the
boundaries
of
of
where
we
think
the
customizing.
D
Posted
a
link,
the
the
folk
behind
open
AI,
have
got
some
really
good
bosses.
I've
posted
a
link
to
the
prompt
engineering
one.
The
reason
I've
chosen
that
one
in
particular
is
is
there's
an
article
I'm
about
to
publish
on
the
ethics,
and
my
article
is
actually
about.
There
is
no
ethics
in
an
AI.
All
of
it
is
done
by
putting
some
guard
rails
coding.
D
What
people
are
allowed
to
ask
and
coding
for
checking
what
the
AI
gives
back
as
results,
and
so
when
people
are
looking
at
doing
customization
and
development
with
AI
at
the
center
I
think
it's
important
that
you
look
to
doing
that
responsible,
AI
piece
and
making
sure
you
do
checks
on
the
input
and
output
from
the
AI,
because
the
AI
will
will
give
you
unacceptable
answers
in
a
few
cases
that
you
need
to
filter
out,
because
it
has
no
sense
of
morality.
B: I'd say I'm squarely at level 200 with GitHub Copilot. I've tried it out a little bit, just with PowerShell, at Todd Klindt's suggestion, and I've gotten some results that don't work and some results that save me time. For me, though, I'm asking it questions, asking it for things that I don't know how to do.
B
That's
my
test
and
and
that's
a
different
test,
then
here's
my
code
make
it
better
but
I'm
finding
that
it
can
be
very
helpful
instead
of
you,
know,
bingling
something
I
can
ask
copilot,
and
it
gives
me
a
very
focused
answer
right.
Sometimes
when
you're
searching
for
something
it
takes
quite
a
bit
of
time
to
find
that
chunk
of
code.
That
isn't
quite
the
answer
and
you
need
to
customize
and
it
to
me.
That's
where
I
am
you
know?
Oh,
this
looks
sort
of
right,
I'm,
gonna,
I'm
gonna.
Try
that
out.
D
Because
I've
also
used
it
for
pulling
together
things
like
Powershell
scripts,
now
I
can't
write
Powershell,
so
the
danger
is
that
I
could
run
a
Powershell
skirt
provided
to
me
by
an
AI
and
actually
I,
wouldn't
have
a
very
good
idea
of
what
it
was
going
to
do,
because
I'm
not
qualified
to
review
that
code
to
see
what
it's
you
know,
whether
it's
right
or
not,
when
you
run
out
well,
hopefully,
if
you
run
a
piece
of
code,
you'll
find
out
one
way
or
another,
whether
it
works
or
not,
the
danger
is
when
we
start
taking
it
outside
of
the
more
objective
spaces
like
development
yeah.
B: Well, to me it's no different from finding some code out there on the web and just running it without figuring out what it means. We've been telling people for years: don't do that. Some people still do. Here's a script, for example a PowerShell script, that is supposed to do what it says in the blog post, and then you run it and you find out:
B: "Oh, I just deleted my document library," or whatever; it does something different. And so there's always a responsibility, when you're looking at anyone else's code, even the person who's sitting next to you at work, to be sure that it does what it's supposed to do. I think that's an onus on every developer. It's pair programming in a sense, not "do it for me."
D: We'll see if we can do that. So this is just my first take at it; I'm sure I'll have more ideas as I go along. On the right of my slide I've put down some thoughts on some of the kinds of things that Copilot might be able to, or is intended to be able to, help with when it comes to doing communication; you can look at those yourself.
D
The
upsides
of
doing
it,
I
think,
is
that
most
organizations,
especially
are
smaller
ones.
Without
dedicated
comms
departments
struggle
with
creating
frequent
Timely,
you
know
accurate,
engaging
content,
we
kind
of
go.
Oh,
we
really
need
to
get
our
staff
to
know
about
this,
we'll
throw
it
out
there
and
yeah.
D
At
the
same
time,
I
was
reading
the
the
work
Trend
index
article
that
Microsoft
put
out
I
will
just
grab
a
link
to
that
now
and
stick
that
in
the
chat
just
this
morning
and
there's
a
really
interesting
statistic
in
that
about
how
much
more
successful
corporations
companies
are
where
they've
got
an
effectively
engaged
Workforce
and
some
of
that
engagement
comes
from
the
level
of
communication.
So
I've
put
the
link
in
have
a
read
of
it
yourself,
but
I.
D
Think
communication
is,
is
a
really
important
part
of
of
of
driving
the
messages
and
the
culture
and
the
goals
across
an
organization.
D
If
we
do
it
right
by
using
copilot
that'll
help,
everybody
and
I
think
the
other
thing
that
I
worry
about
a
lot
with
communication
is
too
much
of
it
is,
is
as
bad
as
too
little
and
co-pilot
will
not
only
be
able
to
look
at
making
sure
that
the
right
people
get
the
right
messages,
but
maybe
personalizing
it
for
them
as
well,
so
making
making
it
appropriate
to
the
people,
not
just
lots
and
lots
of
blanket
messages
like
we
get
off
Facebook
and
every
other
Damn
Channel
at
the
moment
right.
D
So
my
my
levels
have
always
level
100
is
yeah,
we're
just
not
doing
it
and
and
we're
in
as
appalling
all
as
good
a
place
as
we
are
today,
as
we
move
up
into
level,
200
I
think
we'll
find
that
people
are
using
copilot
and
maybe
other
tools
as
well
to
drive
their
messages.
It's
really
quite
alluring
for
doing
that,
because
it
seems
so
simple.
The
issue
is,
as
I
mentioned
before.
If
you
don't
really
think
about
it,
you
may
get
well.
You
know
scary
communication
stuff,
which
is
inaccurate.
D
If
you
don't
know,
there's
this
concept
of
temperature
in
AIS
that
I
only
really
learned
about
a
couple
of
weeks
ago,
when
we
were
all
over
at
Dusseldorf,
so
temperature
sort
of
roughly
maps
to
those
three
options.
You
see
in
bing,
where
you've
got
the
creative
mode,
the
sort
of
standard
mode
and
the
precise
mode,
and
if
you
get
your
temperature
wrong,
you're,
giving
the
AI
license
to
make
stuff
up
and
sometimes
making
stuff
up
is
really
good.
That's
what
creativity
is
about.
D: Einstein once said: if we knew what we were doing, it wouldn't be research. So there's a time for that. But if you need to make sure your stuff is accurate, make sure you get your temperature settings right and make sure that you've done your review. We tend to see that stuff appearing at level 300.
D
There
will
be
policies
in
place
at
300
describing
how
you're
going
to
use
these
AI
tools
to
drive
your
communication.
What
the
checking
the
quality
measures
are
going
to
look
like
again,
we've
wrapped
in
some
responsible,
AI
thinking,
there's
some
really
basic
principles:
I'll
grab
those
in
a
minute
and
just
post
those
into
chat
about
what
responsible,
AI
looks
like
and
making
sure
that
we've
embedded
those
in
the
way
we
use
Ai
and
in
the
communications
that
we
push
out
and
I.
D
Think
I
mentioned
that
the
temperature
thing
to
make
sure
that
you've,
you
know
you've
not
got
unintended
errors
in
the
in
the
content.
Did
we
move
up
to
level
400
yeah?
The
advanced
stuff
is
when
we
start
to
actively
rewrite
repurpose
the
content.
You
know
the
messages
according
to
the
channels,
if
I'm
pushing
something
out
on
a
you
know,
a
social
media
channel.
D
That
message
might
be
very
short
and
compact,
with
a
different
kind
of
graphical
style
and
writing
tone
compared
with
a
Blog,
I
publish
internally
or
a
Blog,
I,
publish
externally
and
I
think
the
level
400
will
be
able
to
get
co-pilot
to
help
us
rewrite
out
or
repurpose
and
rewrite
their
content
according
to
the
different
Target
audiences
and
the
different
delivery
channels
that
we're
doing
and
how
it
Source
references
and
Associated
information
and
imagery
and
all
that
kind
of
stuff
and
then
again
level.
D: I literally have my company culture and business principles up here, so I want to be able to teach my AI that; that's what I'd like to be able to see at level 500. And, as I said, level 500 is personalizing messages, so maybe everybody gets their own personal message, not just a generic one.
D
Maybe
we
can
then
start
asking
questions
after
the
item
has
been
published
and
co-pilot
can
answer
those
questions
for
us,
rather
than
just
dragging
in
members
of
the
com
team
or
the
exec
or
whoever
it
is
and
searching
from
a
I'll
publish
it
when
I
like
to
co-pilot
looking
at
when
there
are
Communications
to
be
sent
because
it
has
insights
into
what's
going
on
in
the
organization,
and
it
will
have
maybe
even
some
degree
of
autonomy
to
say.
I
think
that
this
is
something
which
is
noteworthy.
B: We seem to be generating a lot of good conversation in chat, which is excellent. Well, we'll take all that and paste it into Copilot and get a summary.
A: There's food for thought in the maturity model as well, to be thinking about some of these topics. Talking about GitHub Copilot: the code that's been written by GitHub Copilot, who does it belong to? That's where a lot of the discussion seems to be going, yeah.
D
Well,
that
was
that
was
never
very
clear.
I
mean
GitHub
is,
is
more
complex,
but
again
as
we
start
to
move
this
stuff
into
using,
you
know,
zero-based
repository
rather
than
an
open
repository,
then
that
that
concern
has
already
been
formally
addressed.
If
it
runs
it
yeah,
it's
your
stuff,
Microsoft
will
have
no
access
to
it,
so
so
yeah.
So
those
are
your
choices.
You
know,
use
the
open
source
stuff,
if
you
don't
mind
and
use
the
closed
stuff.
If
you
want
to
make
sure
your
IP
is
protected
right.
E
So
my
question
might
be
addressed
in
the
next
slide,
but
I
noticed
earlier
in
the
the
summary
where
we
stuck.
We
had
all
that
stack
of
competencies.
There
wasn't
anything
there
for
collaboration
and
what
what
we've
just
been
speaking
about
on
the
communication
slide,
there's
sort
of
quite
a
heavy
slant
towards
corporate
comms.
E
You
know
message:
the
company
wants
you
to
have,
as
opposed
to
the
the
one-on-one
comms
I
have
with
my
colleagues
in
the
shape
of
teams,
messages,
Channel
messages,
emails
and
so
forth
that
that
greases,
the
wheels
of
a
collaborative
process
like
getting
a
proposal
ready
so
I,
don't
know
if
there's
even
capability
in
copilot,
that's
going
to
address
collaboration
scenarios,
I
thought
I
just
put
a
question
out
there,
but
I'll
wait
until
the
next
slide
is
done
in
case.
It
answers
the
question.
B
Well,
we
we
do
have
a
collaboration
competency
in
the
maturity
model.
I
I've
brought
back
to
the
to
this
pile
of
of
competencies.
So
it
we
have
the
competency
when
we
when
we
were
thinking
about
what
the
impact
of
co-pilot
might
be
on
the
existing
competencies.
We
all
chose
the
green
ones
just
because
we
wanted
to
talk
about
them
and
also
we
thought
it
was
going
to
be
a
place
where
co-pilot
would
have
a
a
newer
term
impact.
If
that's
right
or
not,
don't
know,
I,
think
I.
B
I think communication between individuals is likely to be impacted a bit less, just because the way we communicate in chat, for example in Teams, is very on the fly. When it comes to more formalized collaboration, it's a fair question, but one we're not really addressing today. Yeah.
A
We certainly did discuss collaboration: should we cover that? It came down to communication or collaboration, and everyone went with communication. But there are definitely lots of great uses; Word Copilot and PowerPoint Copilot are going to be great for collaboration anyway.
E
Because in just those two examples you mentioned, it addresses a task I'm doing by myself, and the collaborative element will come in when I ask a colleague to peer review, or maybe comment or contribute. And I don't know anything about Copilot being able to do that.
E
Maybe when we see that kind of capability evolving, if ever, then that addresses the collaboration competency a bit more directly.
A
It might. I think you might use Copilot to do that initial review before you share the content out with a wider audience. So the content you're creating, you're collaborating on it with Copilot first, and then you're passing it on to the rest of the team to get further input.
B
I could see Copilot playing a role in something like a weekly status update. Here are all the things we've done this week: throw it into Copilot and have it tell us what happened. That's both a piece of content and a collaboration tool; we're communicating what we're working on and how we're doing against those tasks.
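The weekly status-update idea can be sketched as a simple prompt-assembly step. This is a minimal illustration, not a real Copilot integration: the model call itself is out of scope, so the function only prepares the text you would hand to Copilot or any other assistant, and the team name and tasks are made up.

```python
# Sketch: assembling a weekly status-update prompt from task facts.
# The model call itself is not shown; build_status_prompt only prepares
# the text you would hand to Copilot or another LLM.

def build_status_prompt(team: str, tasks: list[dict]) -> str:
    """Turn a list of {'task': ..., 'status': ...} facts into a prompt."""
    lines = [f"- {t['task']}: {t['status']}" for t in tasks]
    return (
        f"Summarize the week for team '{team}' in two short paragraphs, "
        "based only on these facts:\n" + "\n".join(lines)
    )

prompt = build_status_prompt("Intranet", [
    {"task": "Migrate news pages", "status": "done"},
    {"task": "Pilot Copilot rollout", "status": "in progress"},
])
print(prompt)
```

Feeding the model only stated facts, rather than asking it to recall what happened, keeps the summary grounded in what the team actually reported.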
B
I haven't seen a demo or anything in that sort of context, but I could see it working. I think that's where we'd have to be able to train the model using our content, as opposed to the external corpus, currently the world's corpus. And I think that plays into a couple of the points I wanted to make here about management of content.
B
If you don't mind, may I go along? I just noticed we're running out of time. I didn't do a very good job on this slide, I've got to admit; I've been thinking about it as we've been talking. It's reminding me a lot of the discussion we had with IT pros when we were talking about going to the cloud.
B
It's taking away some of the more mundane parts of a process and allowing you to focus on the higher value-added parts. From a management of content perspective, if you're generating, for instance, a bring-your-own-device policy for the inside of an organization, you could certainly ask Copilot to generate a starting point.
B
You could ask it for suggestions as to what that policy should include, but then I think you still need to do the work to make it fit your organization. So management of content is, in my mind, similar to what Simon talked about with communication.
B
We can use it as a tool, as in every other example, but it's a tool that lets us focus on the parts of the content that are more important to our organization and that make us special. So going through the levels: at level 100, no one's using Copilot; that one's always an easy one. At level 200, we're starting to use Copilot to generate some initial passes at things like that policy, or a status update; maybe we give it some facts and then ask it to write them into a couple of paragraphs for us. At level 300, we're starting to understand.
B
Well, how are we using Copilot? What are the right and wrong ways to use it? I don't think we should be using it to, for example, write an entire policy and then just say, okay, great, I'm done. If that's not a rule in your organization, in your governance around these tools, I think it really ought to be. I think also, from a governance perspective, we'll have to
B
come up with a way to either identify, or choose not to identify, content that is generated by Copilot as being such. What does it mean if I'm a content creator and I'm generating a lot of my content using an AI tool? Well, it's probably not entirely different from saying, gee, I don't know how to write this (I'll stick with the policy idea), and just going out, searching, and grabbing paragraphs from various things, or grabbing a boilerplate contract from LegalZoom, for example.
B
There are a lot of sites out there that give you examples, and I think that's what Copilot might be doing for us. But we need to think about when we need to say that that's what this is within our organization, and for what types of content, meaning content types; I had to say "content type" somewhere.
B
What's in a document might be used as a sort of abstract. Oftentimes, as we're building intranets, for example, there might be a document-based version of a certain type of content, but we might want to have an abstract, a short version that people can skim to see if this is the document they're actually looking for. I could see that sort of thing being done on an ad hoc basis at level 300. Then at level 400,
B
we might start to see a programmatic way to say, okay, let's reason over everything in this particular document library and generate those abstracts or summaries. Let's make sure we have a flow so that every time somebody uploads something, it analyzes that content and makes some sort of addition or change to it. And then I think at level 500, maybe it's level 400, but somewhere in there, we want to make sure we're training Copilot with our organizational standards for content.
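The "reason over a document library and generate abstracts" flow can be sketched in a few lines. This is a stand-in, not a real implementation: `generate_abstract` here is a naive extractive summarizer (first two sentences), whereas in practice that step would call Copilot or another model, and the library contents are invented for illustration.

```python
# Sketch: generate a short abstract for every document in a library.
# generate_abstract is a naive extractive stand-in (first two sentences);
# a real flow would call Copilot or another LLM at this step instead.
import re

def generate_abstract(text: str, max_sentences: int = 2) -> str:
    """Return the first few sentences of a document as its abstract."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(sentences[:max_sentences])

def abstract_library(library: dict[str, str]) -> dict[str, str]:
    """Map each document name to its generated abstract."""
    return {name: generate_abstract(body) for name, body in library.items()}

library = {
    "byod-policy.docx": (
        "This policy covers personal devices. It applies to all staff. "
        "Exceptions need approval."
    ),
}
abstracts = abstract_library(library)
print(abstracts["byod-policy.docx"])
```

Triggering this on upload, as the flow described above suggests, means every document gets its skimmable short version without anyone writing abstracts by hand.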
B
Whenever we ask Copilot something in Bing today, which is really where it's most available to most of us, if we can access it, or some variant of it, the ChatGPT kind of thing, it's drawing on that outside corpus; it's not our content. And our content is likely to have some quirks and differences, because it's ours. So we want to make sure Copilot is somehow trained with our organizational standards on an ongoing basis, because those standards will change.
B
So maybe it understands templates that we have defined for content: a policy looks like this, so Copilot knows to respond to any prompts within that framework. It understands the templates, it understands the metadata we need, and it understands the tone we tend to use for that particular content type, and it gives us results that are like that.
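One way to make the "Copilot respects our templates" idea concrete is a simple governance check: after a draft is generated, verify it contains the sections the content type's template requires. The section names and draft below are purely illustrative, not a real organizational standard.

```python
# Sketch: check a generated draft against a content-type template.
# The required sections are illustrative, not a real standard.

POLICY_TEMPLATE = ["Purpose", "Scope", "Responsibilities", "Review date"]

def missing_sections(draft: str, required: list[str]) -> list[str]:
    """Return the template sections the draft does not mention."""
    return [s for s in required if s.lower() not in draft.lower()]

draft = "Purpose: ...\nScope: all staff\nResponsibilities: IT team"
print(missing_sections(draft, POLICY_TEMPLATE))  # ['Review date']
```

A check like this does not judge quality, but it does catch the "write an entire policy and call it done" failure mode mentioned earlier: an incomplete draft is flagged before it goes any further.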
B
Some things have to be created from whole cloth by a human being, and some things can be somewhat synthesized from other content that already exists within our organization. But in all these cases, I really think we're focusing on how we can use the tool to let people do the higher value-added stuff they do, as opposed to the more rote stuff. I think that's where the value is going. And Sharon, I've left you an entire couple of minutes. That's okay!
C
It's kind of interesting, because it's been the same thing for me: as we've been going through this and talking about different things, I picked search to talk about, just because I think search is going to be,
C
from an everyday-person perspective, one of the most impactful. At the end of the day, the reason is, and I put these top two things as my kind of outcomes: first and foremost, people will be more comfortable searching using natural language, because a person of any age, with any tech background, with any language skills, can simply say, I'm looking for this thing, I want to do this thing.
C
They then go back and use other things, so it's actually doubling their work, tripling their work, maybe giving them wrong responses or incorrect or inaccurate information, and so it's actually causing more damage than it is doing good. They're creating little fires everywhere. By the time you get up to 200, now we're starting to say, okay, maybe we're figuring this out; we've got a few basic scripts we can use.
C
We just know that it works and gives us what we want, so we're using these basic things we've learned. But maybe we're only randomly checking the results we're getting; we're not really holding it to any standard. Simon and I had a conversation the other day about getting information out of it, but it being inaccurate, and how damaging that can be. As you go to the higher levels, I think what ends up happening in any more mature organization is that you're starting to define those standards, you're starting to audit, you're starting to figure things out. By the time you get to 500, you're starting to be more innovative and saying,
C
how can this do things for us that maybe we couldn't do for ourselves before, or that are going to set us apart at a business-differentiator level, by using this technology? I'll summarize mine: I think search is going to be a big deal for people when they can get onto things, say what they want, and ultimately get one or two very accurate results back.
B
Yeah, even just having a search engine 20 years ago would have seemed miraculous to us, right? Maybe 25 years ago; I don't know, time flies when you're having fun. And this is an evolution of search. We've had natural-language types of search back in Lotus 1-2-3; they tried. Anybody remember how it worked? Okay-ish. It did things, but now we're way beyond that.
D
Probably down to me again. We'll keep it quick, because we're at time. Just some things to ponder: this pace of change is crazy. In 40-odd years of being in this tech-ish kind of industry, I have never seen something moving as fast as the AI world, so it'll all change by this time next week.
D
But in the meantime, if you've got any more ideas of things we should be feeding into the competencies, please add them to that second form and we'll do something with them. If you've got the time, have a play with Bing Chat, because that's GPT with access to yesterday's data, and with Designer, which is DALL-E but for people who want to do marketing comms. Have a play in the AI playground, because that's fun. And then, as I've said, think about your AI policy.
B
Right, we are over time. If you've attended our calls before, you know all of these links; you can get to the videos from these calls and the slide decks. Those are probably the two most frequently used links there. Our next call will be on September 19th; we are taking the next couple of months off. I don't have the topic here, but it will be either accessibility or data and analytics; we've got conversations going on around both of those. Thank you all for joining, and we hope to see you in September.