From YouTube: AI Experiment - Error Tracking Fixes
Description
An experiment (proof of concept) using GitLab AI chat/completions endpoints to suggest fixes for some errors.
PoC project - https://gitlab.com/gitlab-org/opstrace/sandbox/poc-ai-errors/
Tracking issue - https://gitlab.com/gitlab-org/opstrace/opstrace/-/issues/2160
Hello, my name is Joe Shaw. I'm from the Observability group at GitLab, and I'm just going to do a quick demo of a proof of concept of AI integration with error tracking that we develop in our group. This is for Sentry-compatible errors that we store in our own backend.
You can view the project I've been working on, the proof of concept "AI errors". There's a bit of an overview there of the context and how we generate large language model results for error tracking items, and I'll be adding new material in here. If you want to have a look at it, I'll put a link to this project underneath the video, and if you're a GitLab team member you can run this project yourself.
So these are all JavaScript-related errors, because we get a nice stack trace, as you can see, and this is our error tracking UI here. This first error was generated as a simple scenario; the model should be able to solve it quite easily, and you can probably see the error already.
It's "length is not defined": the length function doesn't exist. You can see a hint just above: numbers.length is what we want, but we're actually calling an undefined function, length, in this product function, so that's just not going to work.
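That first bug and its fix can be sketched like this (the function and variable names are assumptions based on what's visible in the demo, not copied from it):

```javascript
// Hypothetical reconstruction of the first demo error: calling an undefined
// global length() instead of reading the array's .length property.
function product(numbers) {
  let result = 1;
  // Broken version (throws "ReferenceError: length is not defined"):
  //   for (let i = 0; i < length(numbers); i++) { ... }
  for (let i = 0; i < numbers.length; i++) { // fixed: use the .length property
    result *= numbers[i];
  }
  return result;
}
```

With the fix, `product([2, 3, 4])` returns 24; the broken version throws before the loop body ever runs.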
The second error gets a little bit more complicated. Here I'm trying to simulate a document as you would have in a browser, which is a bit different to a typical Node.js error. This is a TypeError.
It says that this.showAlert is not a function, and you can see here where the error is. The problem is that `this` isn't scoped to this function. It's one of those awkward JavaScript errors where the `this` binding isn't scoped here: it's scoped to the caller. The solution is simply to delete that reference.
We remove the reference to `this`, call showAlert directly, and that would just work.
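A minimal sketch of that `this`-binding problem and the fix (the surrounding names are my assumptions, and a plain object stands in for the DOM button so it runs outside a browser):

```javascript
const messages = [];

// Minimal stand-in for a DOM button so this sketch runs under Node.
const button = {
  listeners: {},
  addEventListener(type, fn) { this.listeners[type] = fn; },
  click() { this.listeners.click(); }, // the caller decides what `this` is
};

const ui = {
  showAlert(message) { messages.push(message); },
  attach(target) {
    // Broken version: inside a plain `function` callback, `this` is bound by
    // the caller, not by attach(), so this.showAlert is undefined there:
    //   target.addEventListener("click", function () { this.showAlert("hi"); });
    //   // TypeError: this.showAlert is not a function

    // Fix from the video: drop the `this.` reference and call the function
    // directly, via a captured variable.
    const showAlert = this.showAlert;
    target.addEventListener("click", function () {
      showAlert("saved"); // no reliance on the caller's `this`
    });
  },
};

ui.attach(button);
button.click();
```

The captured `showAlert` is resolved by closure, so it no longer matters what `this` is at the call site.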
Now we get more complicated. This is a TypeError, "rl is not iterable", and here's the context of the code. What we're doing here is having a function that reads a CSV header: it should just read the first line from the file, and it's trying to do it in an async way and only stream out one line.
The tricky bit here is that it's using async/await, which can cause some problems, and the simple solution is that we just need an await in this part of the code. readLines returns an async, promise-like interface, and I've actually got the documentation here for filehandle.readLines(). The documentation shows exactly what we're trying to do, `for await (const line of file.readLines())`, and that would work.
So if the LLM has digested this documentation, it should be able to reason about that. Now, just showing you the issue where we've collected the results: we've got a little intro showing how the model suggestions are created, and this is the first go at it.
That's all we're sending. For the first iteration we're using the chat completions API, which is among the experimental APIs at GitLab that we can use; it calls the OpenAI chat completion API and uses the GPT-3.5 Turbo model, which is the default model for experiments before you pick a more expensive one for something more specific. Taking this quite naive approach, here are the results. For the first one, the prompt is just "the following code snippet has the error", blah.
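That naive first-iteration request might be sketched as follows (the wording and function name are my assumptions, not the PoC's exact code; only the message shape follows the OpenAI chat completions format):

```javascript
// A rough sketch of the naive prompt: one user message, no system message,
// no constraints on how the model should answer.
function buildNaivePrompt(errorTitle, codeSnippet) {
  return {
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "user",
        content:
          `The following code snippet has the error "${errorTitle}". ` +
          `Suggest a fix.\n\n${codeSnippet}`,
      },
    ],
  };
}
```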
Then we get "this.showAlert is not a function", and there's a description of it. In this case the output gets a little bit confused. The explanation is good: it explains the issue, that showAlert is not defined in the context of `this`. But the actual fix does not work, even though it's suggesting using a lambda function here, which changes the `this` binding.
The trouble is that it's still not bound in the right place, and then it starts writing more code than we want here. It's started to generate extra content, and this actually introduces a bug where the callback function is immediately called after addEventListener, which would just end up in a loop.
So that's broken the code more than we would like. And then here's the tricky one, "rl is not iterable", with the async/await bug. When we get the output, it's saying that readLines gives you an iterator, or iterable, object, which is technically true, but then it tries to do something that doesn't really work.
The FileHandle class does not contain a readLine method, which it invents here, and it doesn't have an async method of this nature, so that just doesn't work at all. And I find that for this one, when you run the suggestion it generates completely different results each time: sometimes it rewrites it completely with promise-like syntax, using "on" events to receive data, and sometimes it generates something completely different again.
What we then go on to do is read the best-practices doc in more depth and set up a persona as part of the messages that we send. We can send an initial system message, which sets the scene for what we want the model to do, and we're much more detailed and strict in comparison to the previous attempt.
"The code must still be valid", being explicit about that. "Do not edit the code beyond the boundaries of the original code snippet", again being explicit; we weren't explicit about any of these things before. Then the user message, which is sent with this system message, is just as described: the block of code, the title of the event, and the line number.
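Put together, the second-iteration request might look something like this (a sketch: the persona wording is paraphrased from the video, and the function name is assumed):

```javascript
// The system message sets the persona and the strict constraints; the user
// message carries the error title, line number, and code block.
function buildStrictPrompt(errorTitle, lineNumber, codeSnippet) {
  return {
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "system",
        content: [
          "You are an expert software engineer who fixes runtime errors.",
          "First explain the cause of the error, then provide corrected code.",
          "The code must still be valid.",
          "Do not edit the code beyond the boundaries of the original code snippet.",
        ].join(" "),
      },
      {
        role: "user",
        content: `Error: "${errorTitle}" at line ${lineNumber}\n\n${codeSnippet}`,
      },
    ],
  };
}
```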
The results: again, the length ReferenceError fix is absolutely fine, and it does what we ask it to. It actually edits the `for` block of code here, giving us an output that fixes it, and it doesn't go beyond the boundaries of that. It's not generating more code; it stays within the snippet of code that we've given it.
The this.showAlert one it actually does fix now, and it explains it more clearly: showAlert needs to be made accessible from the enclosing function.
So it takes the `this` reference off, and it even puts a comment in to show why it's done that. The only downside appears if you look at the actual error here in the stack trace, which is collapsed.
If you look at the first frame of the stack trace that we send, we're only going up to line 15. In the full stack trace you can see more of the file, which we don't send; we only send the top frame, because we only want to send the most relevant bit of the code. And you can see here that the model has started to auto-fill parts of this. It's not fully correct and not exactly what you want.
So if you're copying this code directly, you're maybe going to have some problems. Then there's the TypeError, "rl is not iterable", the tricky corner case. Again it follows our directions, giving an explanation and then providing code, but again it generates code that just doesn't work: getReader is a function that does not exist. If it did exist, and you could do `let { done, value } = await read()`, you could just return the value here.
Instead, it's now writing an infinite loop, returning the value and then awaiting a read again, so that doesn't really make sense. Still, it's better than the last attempt, and I would give it more of a 60-to-70 score; I think that last example I'm giving it is being rather difficult. There are a few other conclusions there, and for the sake of completeness I will just run this for you. It's not particularly exciting, because it's not built into GitLab.
It hasn't got a nice UI, but it is using all the GitLab APIs, so we're not cheating in any way here: we're pulling errors directly from the backend as they exist. You can see here that it's sending the prompt and it's got a response, so we can look at that response. This is the "rl is not iterable" readLines one, and you can see it's generated a very different output from what we had previously, so, like I say, that one changes quite a bit as part of this code as well.
We've also got it showing how many tokens we've used in the request. Then the next one, the length one: we know that this one will work. Yep, it works fine; I can scan that and see that it's worked fine. And then this one is the `this` context one, and the response is again a bit different from what we previously had. In fact it's quite different, and it may or may not work, and maybe goes a bit beyond the scope of what we want.
This is an example of how you never really know whether you're going to get consistent outputs from these models. There may be other parameters we can change to pare this down, such as the temperature parameter, which should control how random the results are; I've experimented with that and not really had any better results, so we'll see.
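For reference, temperature is just another field on the same request payload (for the OpenAI chat completions API, 0 is most deterministic and 2 is most random, with a default of 1; whether lower values actually help here is exactly what's in question):

```javascript
// Same request shape as before, with the temperature pinned down for more
// repeatable outputs; everything else about the payload stays the same.
const request = {
  model: "gpt-3.5-turbo",
  temperature: 0, // default is 1; higher values increase randomness
  messages: [
    { role: "user", content: "Suggest a fix for: ReferenceError: length is not defined" },
  ],
};
```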
Anyway, that's it for this demonstration.