From YouTube: UX AI Design Sync 23 09 06
A
Oh, this is the September 6th, or 7th, AI UX team call chat discussion, maybe.
B
Yes, so I have the first and only agenda item, which kind of goes into just a general update, too, about what's happening with Explain This Vulnerability, the Threat Insights AI feature. So earlier today, our lead backend engineer working on Explain This Vulnerability meant to host a meeting titled "Explain This Vulnerability GA security discussion" with the internal security team, but accidentally invited the whole company.
B
So this resulted in, well, how many people can fit on a Zoom page? Roughly, I don't know, because there were three pages of people, so it was huge. I said 30-plus roles; I didn't mean that. There were at least 50, at least 50 people on the call.
B
A range, with roles ranging from customer success to PMs, engineers, legal, and more, asking questions about the feature. So it was interesting, to say the least, and it kind of reminded me of what being in other companies is like, honestly. I think it was a happy accident, because it really reminded me of my last company: being in a room with stakeholders or other people throughout the organization asking really, really tough questions about what is this thing and how do we pitch it.
B
And what is the value here? And what about the concerns, and how are y'all addressing that? I mean, from sales and marketing. And it was really, really interesting. So I linked the agenda.
B
The agenda for that meeting, in the agenda for this meeting. So, just to cover some of the questions that came up there: Roger Wu, who's a senior product manager for self-managed, says, "My team has been working on GitLab Cloud Connector. How do we intend to facilitate Explain This Vulnerability on self-managed instances? One of the key considerations is that self-managed customers are generally closed off and more sensitive." And without going into all of the answers, the next question: what are the biggest value drivers of this feature?
A
An engineer posted one yesterday or two days ago on, I think, releasing it to Beta, and how to set it up and how to use it and all that stuff.
B
This is what I assume that person was speaking about, but I'm curious to see yours. Oh, the first blog post can be seen here. So I guess, unless they were referring to that one: GitLab's services' vulnerabilities contain relevant information; however, most users aren't sure where to start, and it takes time to research and synthesize.
B
So there were a lot of really, really valuable questions in here. Are we storing which vulnerabilities are being detected by GitLab? Is there a history of any kind? And then, like, a list of considerations. So my contribution in this meeting was to...
B
Show everyone that we're going through the steps to validate the quality, validate the UX; that we're not just carelessly releasing these into Experiment and Beta and GA; that we're doing ongoing research; that we're setting success criteria. So I am very grateful to the AI teams for how fast we've moved in creating these kinds of guardrails.
B
I know a lot of it is just up to the teams to set their own success criteria, but I did feel well prepared going into that, because they were very fair questions that people had, especially for roles that are customer-facing. These are questions that are going to come up, and I want to make sure that they have the answers that they're looking for.
B
So my contribution was to link to the research readout that I recently recorded, where I show some customers speaking about, you know, recording their feedback on the Explain This Vulnerability feature, and some other concerns and some of their hopes.
B
So hopefully people were able to watch that video and it answered some of their questions. I also linked to the Dovetail report and the issue, the research issue, so they could dig into it a little bit more, and then I also posted links to the two upcoming studies we have for GA, which I can also talk about now. So, going into, or looking at going towards, GA, there are two things that we're talking about doing in the next milestone, which is 16.5.
B
So the way that this is shaping up is that it's kind of a two-parter. So Michael Oliver has this issue, still in the draft state, it should be noted, but: solution validation, measure the responses provided by Explain This Vulnerability to assess for any positive or negative feedback from the user. So this one is really about measuring the quality of the response, and there's an ongoing discussion down here with Alana. I pointed out that we need to set success criteria for these results, because it wasn't really mentioned up until now.
B
Alana says: thank you; when you were talking to customers during your most recent moderated study, was there a percentage of correct responses customers are looking for? So I'm in the middle of crafting this response, but basically no. I mean, it's, well...
B
Maybe arbitrarily setting this... well, not arbitrarily. I brought this up with internal security engineers, and I think this was probably the goal of the meeting that Gregory intended to have today with the security team.
B
But when I spoke with them, you know, I was hearing "I would expect at least, you know, 90 percent accuracy," so I think, from that, she's targeting 85 to 90 percent. So then, during our AI sync... we have a weekly AI security sync that happens before this meeting, so I...
B
There was a discussion about how exactly we measure this, and I pinged Michael Oliver afterwards, and he is thinking a scale of one to ten, where ten is most useful, or a Likert-scale question like "How useful was this to you?"
B
He says: personally, I would lean towards the one-through-ten scale, because it's a little easier to measure statistically; you can say we want less than five to ten percent of participants to score 1-3, and at least 15... This is where I... this is not my strength, the quantitative side. I don't know how, here.
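(As a minimal sketch of the kind of check Michael is describing, assuming Python and a plain list of one-to-ten scores; the function name and the ten percent cut-off are illustrative assumptions, not agreed criteria.)

    # Hypothetical check of the criterion Michael describes: collect 1-10
    # usefulness scores from participants and require that only a small share
    # of them land in the low 1-3 band. The 10 percent cut-off is illustrative.
    def meets_low_score_criterion(scores, low_band=(1, 3), max_low_share=0.10):
        """True if the share of scores inside low_band is at most max_low_share."""
        if not scores:
            return False  # nothing collected yet
        low = sum(1 for s in scores if low_band[0] <= s <= low_band[1])
        return low / len(scores) <= max_low_share

    # Example: ten participants rated a response; one score falls in 1-3 (10%).
    ratings = [8, 9, 6, 7, 10, 3, 8, 9, 7, 9]
    print(meets_low_score_criterion(ratings))  # True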
B
Yeah, and you can see she did chime in on this issue a week ago: "In the future, I would love to work on more concrete guidance, so each team does not have to find their own system to determine feature maturity."
B
"The quality of the answer should absolutely be considered when maturing to GA, as that is the feature, so I would put 90 percent of my focus there and 10 percent on usability and UI." And then she mentioned Spotify Discover Weekly playlists: "If nine out of ten times Spotify gave me a song I didn't like, I'd probably stop using it. At the same time, if I was getting this bad experience and Spotify was also pushing this feature to GA and shouting about it in the product, I'd be really confused." Very valid point.
B
So this is scoped for 16.5, so I'll be working on this. Specifically, you know, what are the tasks that we feel should be included in this? And then I guess we'll be using a design prototype for this, but of course it's also going to factor in the quality of the response. But I guess we can... I can just put in a copy-paste, a response that we feel...
A
You could technically... UX will be tested, yeah. You could technically do them both, just in separate moments with the same participant, you know. First half is "I want to do a task-based study with you"; second half is "I want to talk to you about these questions." You know, you could do that too. It's moderated, but you could certainly combine them.
B
Well, I think, I mean, if we're saying they both strictly have to be done with external participants... I think, I mean, I asked Alana here.
B
What are the variables that influence if it's needed? I think it's up to us to determine if we want to include external users here, although I'm not sure if a second person could say for sure whether or not the proposed solution worked. I've heard some of our internal Sec experts identify when a proposed fix is not a good solution, but in my research I got the impression they would try the solution, or have their devs try the solution, and then would want confirmation in the subsequent MR that it's not detected again.
A
That's a good point, too, that there are two different sorts of participants, I think, right: people who have evaluated it. One is going to know whether it's a good or a bad answer, right, and that might not be the person. So it also depends on what we're talking about. Right now this only lives in the vulnerability report, but if we were talking about this on an MR page, then that's a different story, because you're talking about a developer, who may not know if it's a good or bad answer, right? Yeah.
B
Or the success, yeah. Because, as I said here, I mean, if people aren't familiar with that type of vulnerability or the solution being proposed, they may just want to take the solution that the AI is offering them, you know, spin up an MR, merge it, and then hope that it works. But if it comes up again in a subsequent MR, that'll be their determination.
B
Sorry, this is an important call, I need to grab this. Okay, go ahead. Can we... can I ping you after, and we can continue?