From YouTube: GitLab Explain this code
Description
Walkthrough of how GitLab's "Explain this code" functionality, built on OpenAI, works.
B
Hi! So finally, we're going to talk a bit about what has been happening in the group recently, since we started doing the AI work. The very first thing taken on by our group was the functionality called "Explain this code". What does this functionality do? You select the code in question while viewing a blob, a question mark pops up, and then you can get an explanation from the AI: we send the request to OpenAI and get back an explanation of what the code does.
B
Unfortunately,
I
figured
out
that
something
something
doesn't
allow
me
to
use
this
functionality
on
gitlab
come:
oh
because
it's
not
gitlopcom.
That's
that
explains
it.
So
I
will
try
to
demonstrate
it
and
hopefully
it
will
work
out.
So
we
just
go
to
I,
don't
know,
let's
go
to
modals
I
have
no
idea
where
I'm
going
so,
let's
just
okay.
B
This is complex enough for me, and I still don't have the question mark, so probably it's not enabled for me — yeah, it's not enabled for me, so apparently I don't deserve that. So the functionality is: you select the code, and then, next to it on the blob page... Oh okay, my local instance actually fired up. So okay, I will show you on a boring JavaScript file. If you don't mind, we select the file, or a part of the blob.
B
There
is
the
question
mark
persistent
on
the
screen
on
this
line
on
the
first
line
of
your
selection.
So
it
doesn't
matter
how
you
select,
it
will
be
the
whether
top
down
or
down
down
to
top
it
will
be
located
on
the
very
first
line
of
your
selection
and
what
it
does
is
it
creates
this
code.
Explanation:
modal.
B
It sends a request to OpenAI to get an explanation for this code, and once that's done, it returns the result. We do not support streaming at the moment, as you may have seen in ChatGPT; instead we wait for the whole thing to be returned to us. And that's pretty much it — that's how this feature works in this particular case.
B
Igor will tell you a bit more about the backend implementation later, but what happens here — for those who might have looked into the OpenAI models — is that in this case we're employing the chat model. The reason to use the chat model, even though this is not a chat, is quite simple; or rather, there are two reasons.
B
First,
this
will
come
to
the
chat
mode
to
the
chat
mode
eventually
and
I
can
demonstrate
it
in
a
bit
if
there
will
be
interest
for
that,
but
we
are
talking
about
the
current
state,
so
this
is
the
current
state,
so
it
will
grow
to
the
Chart
mode
to
the
chat
mode,
but
the
cost
of
using
chat
model
comparing
to
the
text
model
is
nearly
10
times.
B
Cheaper,
so
we
are
trying
what
we
can
in
order
to
minimize
the
costs
both
for
us,
while
testing
and
for
our
customers
when
they,
when
they
use
this,
we
played
with
the
different
parameters.
At
the
moment
we
have
parameters
set
and
they
are
baked
into
back-endable
limitation,
with
the
The
Only
Exception
The
Prompt
itself
that
we
that
we
sent
to
the
to
the
model
is
still
controlled
by
frontend,
but
that
might
change
soon.
So
the
the
functionality
is
pretty
straightforward,
just
exactly
as
you
might
have
expected
to
get
this
in
charge
GPT.
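As a rough illustration of that cost point, here is a minimal sketch of the two OpenAI request shapes — the model names, fields, and endpoint are the standard public ones and an assumption on my part, not necessarily exactly what GitLab sends:

```javascript
// Sketch only: the two OpenAI completion styles mentioned above.
// Chat model — takes an array of role-tagged messages.
const chatRequest = {
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'system', content: 'You are a helpful code reviewer.' },
    { role: 'user', content: 'Explain what the following code does: ...' },
  ],
};

// Text model — takes a single prompt string; per-token pricing was roughly
// ten times higher than the chat model at the time, hence the choice above.
const textRequest = {
  model: 'text-davinci-003',
  prompt: 'Explain what the following code does: ...',
};

const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(chatRequest),
});
const { choices } = await response.json();
```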
B
But the interesting thing is probably hidden in the backend, and I will hand it off to Igor to tell us how it's done there. Oh, right — before we do that, I have to mention that we have two backend connecting points now. While we develop a feature — for example, right now I'm working on the chat functionality — before we bake the parameters into the backend, I have to know which parameters to bake in. So what we do now is keep all the parameters on the frontend while doing the prompt engineering: we figure out how to tweak the prompt in order to get the best possible result, and then, once that's done, we will move that prompt to the backend as well. However, there are two entry points.
B
The
first
one
is
the
experimentation
rest
API,
that's
the
rust,
and
then
there
is
the
abstraction
layer
which
is
graphql.
So
while
we're
developing
in
our
local
GDK,
we
connect
to
the
experimentation
API.
But
when
we
move
to
the
to
the
to
the
to
production,
we
have
to
switch
to
abstraction
layer
which
is
graphcad,
which
is
a
bit
inconvenient.
But
technically,
as
eager
will
explain,
explain
this
later.
B
Technically
they
both
go
to
the
same
point
and
using
abstraction
layer
locally
is
probably
fine,
but
I
won't
I
won't
steal
this
information
from
Eagle
Sean.
You
have
a
question.
A
B
Sure — well, we can do it here; this is the GitLab repo. So let's get here. All of this code is within the ee folder, because we have this available only for the Ultimate customers.
B
What's
actually
in
the
specs,
probably
not
this
not
the
place
I
wanted
to
get
to
now.
Let
me
just
components:
AE
app
assets
right.
So
if
we
go
up
here
in
the
e
app
as
the
JavaScript,
there
is
the
AI
folder,
which
contains
components
this
the
view.
These
are
the
view
components
graphql.
B
This
is
to
connect
to
to
our
abstraction
layer
that
I
talked
about
a
bit
and
then
some
constants
and
details,
so
the
components
are
called
AI
genius,
so
the
main
component
is
called
the
AI
Genie
and
it
can
it
contains
both
this
block
and
this
question
mark
which
is
shown
here.
However,
this
big
component
is
the
separate
one.
It's
called
AI
Genie
chat
and
it's
pretty.
This
component
is
pretty
simple.
There
is
no
magic
Happening
Here.
B
This
is
intentional,
because
this
this
chat
component
has
to
be
reused
by
is
intended
to
be
used
by
several
groups.
For
example,
the
the
search
group
is
working
on,
ask
Tanuki
and
functionality
now
so
the
plan
is,
they
will
use
exactly
the
same
component,
just
pass
in
different
parameters:
different
props
and
then
all
the
magic
with
sending
the
prompts
happens
in
the
root
AI
Genie
component.
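To make that split concrete, here is a hypothetical sketch of what such a presentational chat component could look like — the component is real per the walkthrough, but the props and events are my illustration, not the actual GitLab source:

```javascript
// Hypothetical shape of the reusable, "dumb" chat component: it only renders
// what it is given and emits events — no prompt logic lives here.
export default {
  name: 'AiGenieChat',
  props: {
    messages: { type: Array, required: true }, // conversation rendered as-is
    isLoading: { type: Boolean, default: false },
  },
  methods: {
    // The smart root component (AiGenie, or another group's root) listens
    // for this event and does the actual prompt building and request sending.
    submit(text) {
      this.$emit('send-message', text);
    },
  },
};
```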
B
In this particular case we have requestCodeExplanation: we select the code here, we click the question mark, and that's where the request is sent — in this particular case to our abstraction layer, the GraphQL implementation — with just a couple of parameters. One of them is messages, because we are using the chat model, and the chat model needs messages: an array of different messages, including the responses from the server.
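Roughly, the call site could look like this — a sketch: the mutation name and the messages array come from the walkthrough, while generatePrompt (shown further below) and the other names and fields are assumptions:

```javascript
import explainCodeMutation from '../graphql/explain_code.mutation.graphql'; // hypothetical path

// Chat models take an array of role-tagged messages, which may also include
// earlier responses from the server when used as an actual conversation.
const messages = [
  { role: 'user', content: generatePrompt(selectedCode, filePath) },
];

// Goes through the GraphQL abstraction layer, not straight to OpenAI.
await apolloClient.mutate({
  mutation: explainCodeMutation,
  variables: {
    resourceId, // hypothetical: identifies the project/blob being explained
    messages,
  },
});
```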
A
How do we — we just feed the code in, but how does it know to explain it?
B
At the moment, the prompt is in utils here — so, generatePrompt.

A

B
To give the model a bit more context, we pass the file path, so that the model might get some clues from the file structure or from the location of the file. So we ask for an explanation of the code at this file path, in human-understandable language, presented in Markdown format. Why the Markdown format? Because it's just easier to output on the screen.
B
That's
so
we
fine-tune
in
the
beginning,
I,
I
played
and
Technology
will
be
output
in
HTML
right,
so
that
was
natural
for
me
to
in
the
first
place,
to
to
construct
the
The
Prompt
so
that
it
would
return
me
the
HTML
right
away.
However,
all
the
HTML
tags
will
count
towards
the
token
limitation,
so
users
and
us,
including
will
have
to
pay
for
all
those
HTML
markup
elements
to
be
passed
back
and
forth.
B
So,
to
avoid
this,
we
just
request
markdown
file
format
and
output,
HTML
ourselves
no
big
deal,
and
then
this
is
already
like
part
of
the
fine-tuning
The
Prompt,
because
openai
is
really
is
really
keen
on
eating
up
the
tokens
by
duplicating
the
code
that
you
sent
it
and
providing
the
title
coming
up
with
the
title.
B
So
we
we
explicitly
say
do
not
do
that
and
then
we
just
pass
the
selected
code.
That's
it
so
with
this.
Well,
this
prompt
everything
starts
here
and.
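Putting those pieces together, here is a hypothetical reconstruction of what the generatePrompt util described above might look like — the function name and the ingredients (file path, Markdown, no code duplication, no title) are from the walkthrough, but the exact wording is mine:

```javascript
// Hypothetical sketch of the prompt builder living in the ai/utils module.
export const generatePrompt = (selectedCode, filePath) => `
Explain the code below, which comes from the file ${filePath}.
Write the explanation in human-understandable language and format it as Markdown.
Do not repeat the code in your answer, and do not come up with a title.

${selectedCode}
`;
```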
C
B
Yeah, you're welcome. So with that, let's get to Igor, and Igor will tell us what he does with this prompt in GraphQL. I'll stop sharing.
D
Yeah, let me share my screen.
D
I probably won't dive into a lot of details, but I've sent three merge requests to the abstraction layer. There is the AI Enablement team — we have a separate team which actually works on AI integrations — and that team does a great job of creating reusable components.
D
So we can use those components to build endpoints that are specific to our particular use case. The GraphQL request actually looks like this: it's a mutation called aiAction, and it accepts — one moment...
D
So
I'm
here
yeah
yeah
and
this
mutation,
it's
actually
accepts
parameters,
arguments
related
to
our
specific
specific
endpoint.
So
if
we
quickly
have
a
look
at
the
implementation.
D
Ai
action
yeah,
it
looks
like
this.
This
is
an
implementation
of
the
mutation
and
it
dynamically
defines
arguments
Arguments
for
a
particular
functionality,
for
example,
this
method
it
returns,
like
other
summarize
notes
or
explain
code,
and
those
methods
are
dynamically
defined
here.
The
input
type
is
also
dynamically
defined.
So
let
me
quickly
show
you
the
implementation
for
explain
code.
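From the client's point of view, that dynamic definition means each feature shows up as its own key on the mutation input — roughly like this (a sketch: aiAction and the explain-code/summarize-notes split are from the walkthrough; the exact field and type names are assumptions):

```javascript
import gql from 'graphql-tag';

// Hypothetical operation document: each feature gets its own input key
// (explainCode, summarizeNotes, ...), which the mutation resolves to the
// matching backend service.
export default gql`
  mutation explainCode($resourceId: ID!, $messages: [AiMessageInput!]!) {
    aiAction(
      input: { resourceId: $resourceId, explainCode: { messages: $messages } }
    ) {
      errors
    }
  }
`;
```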
D
It
changes
the
okay,
sorry
I
forgot
to
mention,
then
this
mutation
actually
being
executed,
and
this
service
is
actually
the
one
that
holds
the
our
specific
service.
So
in
order
to
implement
new
endpoint
new
functionality,
we
start
from
this
service
and
Implement
and
specify
our
specific
specific
service
that
will
like
do
all
the
logic
or
maybe
sorry
I
should
have
started
from
from
the
whole
beginning
yeah.
Currently
we
have
in
order
to
implement
experiment
with
open
AI
features.
D
It
just
performs
an
open,
API
request,
so
we
have
a
simple
rest:
API
endpoint
and
anyone
can
just
if
you
they
want
to
create
a
feature.
They
can
just
experiment
with
the
feature
they
can
just
use
this
endpoint.
The
problems
with
this
with
using
this
endpoint
for
for
everyone
yeah
in
production,
is
that
openai
API
request
can
take
a
lot
of
time,
yeah
even
more
than
60
seconds,
and
our
Puma
timeout
is
60
seconds.
So
sometimes
so
it's
not
recommended
to
put
such
loan
loan
HTTP
requests
in
our
synchronous
to
perform
the
synchronously.
D
Naturally, you want to perform it in Sidekiq, but if you need the result from this heavy operation, you need to communicate it somehow to the frontend, to the user. We use GraphQL subscriptions to communicate this information to the user — something that probably should have been used for the sync-fork functionality too, because that operation is really slow, really long, and is performed in a Sidekiq job. Maybe we will refactor it in the future. So these OpenAI requests are implemented this way.
D
A Sidekiq job is scheduled, the Sidekiq job performs an OpenAI API request, and the result of this request is communicated to the frontend using a GraphQL subscription — which is basically a WebSocket.
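On the client, subscribing to that result might look something like this — a sketch: the job-plus-subscription flow is as described above, but the operation and field names are assumptions:

```javascript
import gql from 'graphql-tag';

// Hypothetical subscription document: once the Sidekiq job finishes the
// OpenAI call, the backend pushes the result over this subscription
// (a WebSocket under the hood).
export const aiCompletionResponse = gql`
  subscription aiCompletionResponse($userId: ID!, $resourceId: ID!) {
    aiCompletionResponse(userId: $userId, resourceId: $resourceId) {
      responseBody
      errors
    }
  }
`;
```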
D
So maybe let me first show you an example. We have this file, and we want an explanation for this line. When we click on it, a GraphQL request is performed, and...
D
It's slow, but we actually received the response, and now we have this result, which is then rendered here. And it seems that the GraphQL subscription instantly unsubscribes after the operation is performed. So I had obsolete information: I thought that WebSockets were discouraged for GitLab Rails instances that are not scaled, but it seems we've started using GraphQL subscriptions quite actively — for example, on the merge request page, or other pages that update dynamically, we're using GraphQL subscriptions. And maybe this is the reason why it's cheaper than the usual WebSocket channels: it just subscribes, waits for feedback, and then unsubscribes.
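That subscribe–wait–unsubscribe pattern could look like this on the client (a sketch; the Observable API is Apollo's, while renderExplanation and the variables are assumptions):

```javascript
// One-shot subscription: open it just before triggering the mutation,
// tear it down as soon as the single expected result arrives.
const sub = apolloClient
  .subscribe({
    query: aiCompletionResponse,
    variables: { userId, resourceId },
  })
  .subscribe(({ data }) => {
    renderExplanation(data.aiCompletionResponse.responseBody); // hypothetical
    sub.unsubscribe(); // no long-lived channel to keep paying for
  });
```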
A
D
Just because it performs a request to OpenAI — and if you've tried to play with OpenAI's ChatGPT, it can provide some impressive results. So it's okay: even complex regular expressions should be handled quite well. Great.
A
Looks great, yeah. Sorry, just another one while I'm speaking, I guess: this section where it says "do not copy any part of this output", blah blah — is this for staff only, or for anyone? This one, yeah — exactly, yeah.
B
Yeah,
so
technically,
this
is
at
the
moment.
This
message
is:
is
available
to
everybody
who
has
access
to
this
functionality.
How?
But,
however,
since
we
control
who
has
access
to
this
functionality-
and
we,
we
didn't
really
do
anything
with
restricting
this,
because
I
wouldn't
expect
this
to
to
sit
here
for
for
really
long
like
it's
it's
to
prevent
this
early
day
sort
of
misuse
according
to
the
legal
obligations
we
have
so
it's
more
to
the
to
the
gitlab
team
members,
but
it's
it
is
supposed
to
be
gone
later.
B
Feature?
Yes,
yes,.
B
The
has
the
list
of
all
the
all
the
voodoo
magic
that
has
to
happen
for
one
to
be
lucky
to
get
this
functionality,
because
it's
just
like
they
list
like
I,
don't
have
it
on
gitlab
I.
Don't
know
why
it's
just
like
my
the
flag
for
me
is
not
enabled
apparently,
but
that's,
that's,
really
challenging
to
get
access
to
this
functionality.
A
B
So Igor did a tremendous job on the backend, then there was the frontend support and a lot of things to refine, but the prototype was delivered over the course of a weekend.
C
Yeah, maybe I missed it, but do we cache any of these in any way, to prevent over-asking anything of ChatGPT? Like, do we cache with the code that we passed through as the key, or how do we do this?
D
So currently, the quick answer is no, but there are plans — for example, storing these responses for particular lines, and there are plans for rate limiting these requests. So there are some protective techniques that are planned to be implemented, but we don't have them at the moment. And a quick note regarding the previous question about how long it took: Denys did a tremendous job — let me rephrase — creating the POC.
D
He
did
it
really
really
fast,
but
we
did
we
needed
extra
time
to
do
it
like
right,
yeah,
with
this
obstruction
layer
with
a
backhand
and
and
yeah,
and
it
was
here
released
on
Friday.
B
It's
been
literally
the
whole
the
whole.
The
whole
gitlab
team
working
with
AI,
was
working
in
parallel,
I
I
think
when
we
started
in
the
group
there
was
even
no
experimentation
layer
yet
and
then,
like
as
sort
of
as
different
parts
of
the
puzzle
were
getting
together.
We
as
a
group
were
delivering
different
bits
and
pieces
and
then
technically
it
took
two
weeks
to
get
to
production
yeah.
It
was
it
no
actually,
it
was
I
mean
less
than
that.
We
we,
we
moved
it
to
to
production
yeah.
E
I just wanted to say it's a really great job — thank you for working on that; it looks pretty cool — and to add some of the things that I discovered about it myself. First of all, it was about using the WebSockets: I was not aware that we actually used them anywhere, so it kind of gives ideas about some possible optimizations we can do.
C

E
The other thing is from a security or reliability point of view, which Patrick partially covered: we create one Sidekiq worker for each request at the moment.
B

D
Yeah, we don't have that limitation — the only limitation we have is, like the explanation says, that it's the first iteration. Also, the feature is limited: it's not publicly open, it's being rolled out only for Ultimate users. So there are plans to introduce rate limiters, but yeah, it works like this at the moment.
B
General
there's
there's
a
lot
of
work
has
to
be
done
probably
before
we
actually
can
claim
that
we
we
can
go
live
with
this
for
for
a
broader
audience
at
the
moment,
it's
with
the
control
group,
but
before
we
introduce
somebody
who
who
might
want
to
just
go
on
and
abuse
it
through
full
size.
That's
there
are
a
lot
of
things
that
have
to
be
done.
Probably.
E
D
That's a great question, yeah. Currently this access token is instance-wide, and it's specified in settings. This is one of the reasons why the feature is for Ultimate users only: with this current endpoint, it's effectively a free ChatGPT endpoint for anyone to use. So this is why it's so limited, and mostly it was done this way for the first iteration — I would say we had to move quickly.
D
We
send,
like
the
whole
prompt
the
we.
We
send
the
selected
text,
but-
and
this
selected
text
is
also
limited
on
backend,
because
it's
not
recommended
to
schedule
like
large
portions
of
text
inside
kick
so
it's
okay
for
the
first
iteration,
but
probably
we
should
consider
like
send
in
maybe
lines
so.
Instead
yeah
lines,
numbers
and
calculate
the
text
on
back
end
like
yeah,
there's
room
for
improvement,
but.
A

D
There are examples to follow, I think. And yeah — thank you very much for listening. Thank you! Then maybe you want to wrap up, or maybe... yeah.
B
There are a lot of ideas in the air at the moment, and there will be plenty of work, because right now we try to split the work on the frontend among three engineers and then offload it all to Igor for the backend support. We have several entry points, and then everything goes to him. I think at some point, because the plans are pretty ambitious, we will have to scale the current approach somehow. But yeah, everybody's welcome to contribute, of course — it's a lot of fun, and it's very, very rewarding to see this happening.