Description
Pedro Moreira da Silva, Staff Product Designer, shares how he's thinking about breaking down the AI UX framework, with its lenses and patterns.
https://www.figma.com/file/s4TP1i2Akd1VTh4jhbg234/AI-prioritized-prototypes?node-id=2786-84372&t=Xm0nhGsxdsrk248r-4
https://gitlab.com/groups/gitlab-org/-/epics/10334
Hey everyone, my name is Pedro Moreira da Silva, and I'm a Staff Product Designer working on the AI integration initiative. I'm going to share with you what I've been thinking for the UX framework of this AI effort: the different lenses through which I'm seeing this, and some of the ways that I think we can connect the experiences and the patterns that make up this experience. Both the patterns that are currently being explored, as well as some of the patterns that I'm predicting we will need to tackle soon.
So first, let's look at the different lenses. The first one is scale. For this, there are three levels of scale: focused, supportive, and integrated. Focused is the one that puts the AI in the foreground the most, and integrated is the opposite: it's when the AI is integrated into the current context. It takes a back seat and is more in the background; still foreground, but more integrated, as the word says. I'll show you in a bit how that can potentially be represented.
Then we have the approach: different ways that teams can approach the problems.
We also have interactivity: how you interact with AI. Right now there are basically two ways, proactive and reactive. Proactive means that no user interaction is needed for the AI to output content or perform actions, and reactive is based on user interaction; we will see how these are represented next. The last two, problem and effort, are what the teams have been mostly focused on, and they have to do with the problem.
There are many categories that we could put here, and they have different names across the industry, but to simplify, I've narrowed it down to just these three categories. Classification is about taking categorical data, that is, data that is not continuous, and doing tasks like categorizing, suggesting, ranking, and matching. Generation is about summarizing, explaining, or creating content. Prediction, which is also called regression, is about forecasting continuous, non-categorical data, so things that have to do with numeric values, things that are not categorical, as the word says.
Finally, we have the effort of implementation: three levels that have already been described at the beginning of this effort, and it has mostly to do with the implementation. The first is an ad hoc task, which is an immediate API response. Then we have the next level, which requires more work: pre-processing and fine-tuning the data and the model. And finally, we have custom models and custom implementations that need to be built, where we cannot use an API for responses.
So how do all of these lenses come into effect in the framework? This is what I'm seeing right now and what I'd like your feedback on, probably here in Figma, to keep it as quick as possible.
So here I have three columns with some lo-fi mock-ups, and again, this may or may not be how we represent the user interface. I don't want to over-prescribe, and that's why all of these are very low fidelity. So the first one is focused. Or actually, let me start the other way around: let's start with integrated. Or actually, since I'm improvising this, let's start with the middle one, supportive. Supportive is what people are already familiar with: the chat interface.
It's what everyone talks about. The chat interface, as it says here in the summary, is a single thread which is aware of the context and may potentially allow, in the future, navigating and creating different threads. It's a multi-turn conversation where you can ask questions of the AI and make requests, and all of this happens in this chat interface.
Here the approach is more about augmenting the capabilities of humans by providing them with more information, or information in a format that is actionable by them, and it helps them understand what is happening. It is also an opportunity to suggest automations and navigation. So we can have the AI, at a certain point, suggest: "Hey, do you want me to do this for you, since we're talking about this?" Or it can suggest: "Hey, you should try navigating to this page or this section of the platform to perform this."
It covers basically all problems. In terms of interactivity, it's reactive. My idea here is that this should only be summoned by the user, with their awareness; it should not pop in out of nowhere like Clippy.
We can talk about the transitions later. Here's an example of how this can come into effect: we have a button which, in this case, very clearly says "Help me fill this form". You click on it, and with this AI support on the side, you chat with it and it helps you fill out the form fields here. This can also happen across this whole flow or wizard. This is the supportive scale.
Let's now take a look at integrated. Integrated is when the AI is already surrounded by the main context. In supportive, just to remind you, we have the main context, which takes up most of the space, but we have this drawer on the side to support you during those actions. And here we have the example of the search, or command palette. When you trigger it, you have the ability to ask questions or request things from the AI. Maybe here we can have a question that the user asked.
We have the answer here, and the user can then potentially even click here to initiate a conversation with the AI and ask follow-up questions about this answer. We can also suggest other related questions, or things for the user to do based on this question or input. It's basically a mini version of supportive, right? Something like this could even be integrated into the docs site.
If people want really quick answers to their questions, it's also similar to what the initial proof of concept for the Tanuki bot did. It can transition to supportive, as I showed: you can have a button here that you click, and it shows you the sidebar with the drawer, so you can continue your conversation with the bot. It can also suggest automations and navigation here with some buttons.
Another pattern that we're exploring is help: having a call to action in the Help menu that triggers the ability for you to ask help from the AI, using our user documentation, or even other questions related to the platform and the instance that you're running on, if you want help debugging something or finding something. So this here is a trigger to transition to the supportive mode.
Here we have another pattern, which is integrating AI into text areas. Here we see a markdown text area; it could potentially also be used in a normal, browser-native text area. In this markdown area, the problem is the generation of content: the AI is helping with generating content or modifying it. It is interactive, sorry, it's reactive, so you'd have to opt in to using the AI, and it is automating the creation of content.
Potentially, we can also think about clicking a button and moving into the supportive mode with the sidebar drawer, in case people want to ask the AI more questions about the content. I'm not sure if that makes sense or not, but that's something we could explore. This text area pattern could be implemented anywhere users need to create long-form content.
Instead of having to deal with the query builder that we have to filter the issues list, for example, you could type out in natural language what you would like, and the AI will build the query for you. This pattern could potentially be applied to many, many inputs across the product.
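Purely as an illustrative sketch of the idea, and not how GitLab implements this, turning a natural-language request into structured filter tokens could look something like the following. `ask_model`, the prompt, and the token names are all invented stand-ins for whatever completion API and filter syntax the real feature would use:

```python
# Illustrative sketch only: a natural-language request becomes structured
# filter tokens for an issue list. `ask_model` is a hypothetical stand-in
# for a real language-model call.

PROMPT = (
    "Convert the request into issue-list filter tokens of the form "
    "key:value, separated by spaces.\n"
    "Request: {request}\n"
    "Tokens:"
)

def ask_model(prompt: str) -> str:
    # Stubbed response; a real implementation would call a language model.
    return "label:bug assignee:@me state:opened"

def build_query(request: str) -> dict:
    # Parse the model output into key/value pairs the query builder expects.
    raw = ask_model(PROMPT.format(request=request))
    return dict(part.split(":", 1) for part in raw.split())

print(build_query("my open bug reports"))
```

The point of the sketch is that the UI keeps its existing structured filter, and the AI only fills it in, so users can still inspect and correct the generated query.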
Similarly, we can think about transitioning this to supportive, in case someone wants to ask more questions about the data that they're looking at, or about how to build a specific query to filter the list.
Another potential pattern is the scoring of list items. When we're listing many items of a specific object type, we may consider using AI to score, prioritize, or rank them; in this case, it's the ranking pattern. For example, automatically ranking vulnerabilities based on a number of different factors, or ranking issues based on popularity.
It takes a lot of the content and comments into account and creates a more natural ranking system. And because all of this is using AI to score the list items, we may need to provide explanations inline. So again, this is still on the integrated scale, and here we see a popover that provides an explanation of why this item was ranked a certain way.
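As a sketch of that idea only, with invented field names and weights that come from nowhere in the video, scoring items while keeping a human-readable reason alongside each score, so a popover can surface it, might look like:

```python
# Illustrative sketch only: rank list items by a score and keep a
# human-readable reason with each item so the UI can show it inline.
# The weighting below is made up for the example, not a real model.

def score_item(item: dict) -> tuple:
    score = item["severity"] * 2.0 + item["comments"] * 0.1
    reason = f"severity={item['severity']}, comments={item['comments']}"
    return score, reason

def rank(items: list) -> list:
    scored = []
    for item in items:
        score, reason = score_item(item)
        # Attach both the score and the explanation to each item.
        scored.append({**item, "score": score, "why": reason})
    return sorted(scored, key=lambda i: i["score"], reverse=True)

items = [
    {"id": 1, "severity": 3, "comments": 12},
    {"id": 2, "severity": 5, "comments": 1},
]
print([i["id"] for i in rank(items)])
```

Keeping the `why` string next to the score is the part that matters for the pattern: the explanation shown in the popover is produced at ranking time, not reconstructed afterwards.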
If you want to ask questions or somehow manipulate this summary, you could do that with the help of the assistive, sorry, the supportive mode. Another potential pattern for summarization is having a block of content that is automatically generated by AI in the description. So in this case, this is not in the activity.
It's part of the description of the object, a merge request, an epic, or an issue. Using some kind of notation and syntax in the markdown area, you could create a block which is dedicated to summaries, and the summary is kept up to date. Or we could add a button here that users can click, and it would automatically update and keep this up to date with the AI content. This would not be editable by users; it is a block of content that is specific to AI content. Again, these are all just ideas.
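Purely to illustrate what such a notation could look like (this syntax is invented for the example; nothing like it exists in the product), a dedicated, non-editable AI summary block in the description markdown might be delimited with comment markers the renderer recognizes:

```
<!-- ai:summary -->
> Summary generated by AI. This block is kept up to date
> automatically and is not editable by users.
<!-- /ai:summary -->
```

Delimiting the block in the raw markdown, rather than storing the summary elsewhere, would let it travel with the description while still letting the renderer treat it as machine-owned content.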
Another integrated pattern, similar to the one we've seen here in the ranking, is providing explanations inline: selecting a piece of code, or a word or a sentence in, you know, long-form prose content, and having the AI explain it for us, or define a word, or translate it, whatever it is. And then again, we also have quite a few possible actions.
One of them is to transition to supportive mode for users to ask follow-up questions. And finally, charting. This is probably the only example I could come up with for the prediction problem, and this pattern is something that Libor already worked on for the deployment frequency chart, which is part of the DORA metrics, with a button to show the forecasting data. It might also be helpful for users to recognize that this data was generated by AI, and also to have the ability, either here in the legend, in the tooltip that appears when you hover over the charts, or here in this control, to trigger the supportive mode, in case you want to ask follow-up questions about the data or the forecasting, or even to help users interpret the data. So these are the patterns that I was showing you for integrated, and how we can move from integrated into supportive. And now, an example of how, from supportive, we could potentially reach the focused scale.
So from here, as I was saying in the beginning, we may need to create, or allow users to create, separate threads and navigate between those threads, depending on the context, so that they can maintain separate threads at the same time for different contexts: for example, one regarding a vulnerability, and another one regarding creating a CI YAML file for another project. And from here, we may need to have a transition to a more focused mode.
For example, if this conversation was started in a specific issue, maybe we can have a link there for users to navigate back to that issue. And this would be an entry in the personal work navigation, with the list of threads that you have open, maybe allowing users to delete specific threads or rename them, like ChatGPT does, and then also, you know, the main thread itself. So yeah, I invite you to take a closer look at this Figma file.
I'll share the link along with this video, and I'm open to your comments and questions about all of this: whether it makes sense to you, whether it doesn't, or whether you recognize other potential patterns that are emerging that we would need to take care of. Overall, what I'd like is to find specific people to design and handle specific patterns identified here, so that we can have DRIs and properly explore these patterns and abstract them for all teams to work with. The prime example right now is supportive, where Michael Lee is working on the chat interface that can be reused by different teams, but we have so many other patterns here, and I would like to have other people handling them. I've already reached out to Libor to talk about the text area pattern and also the summarization, and he will be syncing with Alex from Code Review to understand where the overlaps are and who can be the DRI and handle the design of these patterns. Thank you so much for your time.