First ever weekly update for AI Assist, week 12.
So the last couple of weeks I've been working on onboarding: going through all of the onboarding tasks, getting to know GitLab and the way we work, and also hearing ideas on AI Assist and figuring out what we're actually going to build. So most of this week I've been spending time on drafting the vision for AI Assist, which I want to elaborate upon during this video. Please give me feedback on this, because it's a very broad subject: we can go different ways and take different approaches, and I want to iterate as much as possible.
One of the subdomains that I've been working in for the last couple of years is machine learning. I think machine learning and data science have been going through the same hype effect as AI: what used to be called data analytics became data science, data science then became machine learning, and machine learning became AI. This changing of the terminology is a bit confusing, because machine learning is part of AI, while data science is not per se machine learning or AI.
It can also be advanced analytics, but also deep learning or neural networks, which are subsets of AI. In my opinion, the most important aspect of AI is the self-learning aspect, the continuous learning.
So if we strive for a machine learning model with a feedback loop, we can do something that's called continuous learning, where our model will continuously learn from the feedback that is provided. So let's say our model predicts a classification of group A, and a user gives feedback that it shouldn't be group A, it should be group B.
We can incorporate that feedback into our model to make it learn from the mistakes it has made, because everything changes constantly: data might change, behavior might change. You need to measure this drift and continuously adjust for it.
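As a minimal sketch of what such a feedback loop could look like, assuming a simple incrementally trainable scikit-learn classifier (the features, labels and data here are all hypothetical placeholders):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical setup: two classes, group A (0) and group B (1),
# with some initial labelled training data.
rng = np.random.default_rng(0)
X_initial = rng.normal(size=(100, 4))
y_initial = rng.integers(0, 2, size=100)

# SGDClassifier supports partial_fit, so user feedback can be
# folded in incrementally instead of retraining from scratch.
model = SGDClassifier(random_state=0)
model.partial_fit(X_initial, y_initial, classes=np.array([0, 1]))

def incorporate_feedback(features: np.ndarray, corrected_label: int) -> None:
    """One step of continuous learning: update the model with a user correction."""
    model.partial_fit(features.reshape(1, -1), [corrected_label])

# The model predicted group A (0) for this sample, but the user
# says it should be group B (1), so we learn from the correction.
sample = rng.normal(size=4)
incorporate_feedback(sample, corrected_label=1)
```

Drift could then be tracked by monitoring, for example, accuracy on the most recent feedback over a sliding window and adjusting the model when it degrades.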
The other thing is that AI is very broad and we want to make it more specific. So, instead of just doing, let's say, A/B testing on your feature, you also need to do A/B testing on your model, because how else would you know if your model is actually performing well?
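As a rough sketch of what A/B testing a model could mean in practice (the variant names and the acceptance-rate metric are hypothetical, not an existing GitLab mechanism):

```python
import hashlib

VARIANTS = ("model_a", "model_b")

def assign_variant(user_id: str) -> str:
    """Deterministically split users between the two model variants."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return VARIANTS[bucket % len(VARIANTS)]

# A simple online metric per variant: how often each model's
# suggestions are accepted by the users who saw them.
shown = {v: 0 for v in VARIANTS}
accepted = {v: 0 for v in VARIANTS}

def record_outcome(user_id: str, was_accepted: bool) -> None:
    variant = assign_variant(user_id)
    shown[variant] += 1
    accepted[variant] += int(was_accepted)
```

Comparing acceptance rates between the two buckets then tells you whether the new model actually performs better for users, not just on offline metrics.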
So I think a lot of features in the future will incorporate aspects of machine learning or AI, and it will become more and more common to use them. If we think about AI features and how they can benefit GitLab, I can think of a couple of features where we can incorporate AI to improve the experience, such as pipeline optimization.
For example, Dockerfiles are usually not properly optimized for caching, meaning your build will take way longer compared to when you have a proper caching strategy.
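To illustrate the kind of caching issue meant here, consider a hypothetical Node.js project: if the whole source tree is copied before installing dependencies, every code change invalidates the dependency layer, whereas copying only the manifests first keeps that layer cached.

```dockerfile
FROM node:18
WORKDIR /app

# Cache-friendly ordering: copy only the dependency manifests first,
# so this layer is rebuilt only when the dependencies change...
COPY package.json package-lock.json ./
RUN npm ci

# ...and copy the frequently changing source code afterwards.
# Copying everything before `npm ci` would reinstall all
# dependencies on every single code change.
COPY . .
RUN npm run build
```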
And there are plenty more ideas that we can think of; I've written down a few. The one that I'm going to focus on is code optimization, in the beginning specifically focused on security practices.
Why should we do that? Well, there are a couple of reasons. The first one is that code optimization is a very broad and explorative subject. It can go in different directions, so it's very explorative and therefore a good fit for incubation engineering, compared to, for example, build optimization, which is much more narrow than code optimization.
The second point is that we don't have the bias of introducing something completely new, because we are already familiar with these tools, and our users are as well, so we're just extending upon that existing knowledge. The third point is that security is becoming increasingly important: every week you will hear of some breach or some incident or some vulnerability, so it's important that we help our users write secure code.
The fourth point is that it allows integration with existing tools, therefore boosting their efficiency. We already have a lot of tools within GitLab that analyze code, for example, or that could be used but maybe users are not using them. So by optimizing their code, we could connect to these tools and create more exposure for them.
I haven't experienced any of these tools in GitLab yet; that's something that I'm going to do next week. So on this fourth point I might have to come back, based on the learnings from next week.
So our mission, basically our big hairy audacious goal, is to create an assistant that will help users write secure code and educate them on why this is secure code and why what they were doing before is insecure.
This is also in line with the iteration flow and the lean startup methodology. The first assumption is that users want GitLab to proactively help them detect optimizations to their project. I think this is key, because we can suggest anything, but if users are not willing to accept our help, they will never use it.
The second assumption is that we are capable of providing meaningful suggestions. This assumption is a bit more vague. The reasoning behind it is that there are currently already a lot of linters, code analysis tools and security tools, but they create a lot of, let's say, noise. If you've ever looked at a container registry, you will see many, many warnings about vulnerabilities, either severe or insignificant, and a lot of those are ignored, either because of policies set within the company or because of the knowledge that it doesn't pose a problem for that specific situation.
So we need to figure out how we can provide meaningful suggestions; we need to learn what is and isn't a problem or a good suggestion for a user. The third assumption is that users are willing to accept and/or provide feedback on our suggestions. This point is crucial for our self-learning capabilities: if we provide a suggestion to a user, ideally we would want them to accept it or provide feedback on it, so we know if our suggestion was good or bad. So, breaking it all down into our strategy.
Currently, if you have any kind of linter running in your CI/CD and there is a problem, your step will fail and you have to go to the logs. So, for example, this one is correct, but all of the output usually is written somewhere here. For example, this one: all entries have a GitLab username, but if they didn't, it would be a similar output, only in that case it would have a file and line number.
If we start to integrate that output into the actual repository, we could see the output of the linters, only the warnings and errors, right in our repository, and it would probably become much easier to figure out if there's an issue. We could also start linking it to external information, like why this is an issue.
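One way this integration could work today is GitLab's Code Quality report artifact, which merge requests can already render. A sketch, assuming flake8 as the linter (any tool with machine-readable output would do):

```python
import hashlib
import json
import subprocess

# Run a linter (flake8 as an example) and convert its findings into
# GitLab's Code Quality report format, so warnings show up on the
# merge request diff instead of being buried in the job log.
result = subprocess.run(
    ["flake8", "--format=%(path)s:%(row)s:%(code)s:%(text)s", "."],
    capture_output=True,
    text=True,
)

issues = []
for line in result.stdout.splitlines():
    path, row, code, text = line.split(":", 3)
    issues.append({
        "description": f"{code}: {text.strip()}",
        "check_name": code,
        "fingerprint": hashlib.sha1(line.encode()).hexdigest(),
        "severity": "minor",
        "location": {"path": path, "lines": {"begin": int(row)}},
    })

with open("gl-code-quality-report.json", "w") as f:
    json.dump(issues, f)
```

Declaring that file under `artifacts:reports:codequality` in `.gitlab-ci.yml` then surfaces the warnings in the merge request UI.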
Based on the first two steps, we need to figure out what the shortcomings are, because step one in any machine learning project is to determine what the problem actually is.
So if, from our learnings, we figure out that there's a gap, or that certain features or certain aspects are not being detected properly, then we have our problem: we need to start detecting those. Or we need to start figuring out what the noise is and how we can filter that out, which is step four.
This can probably tie in with point three; I'm not sure which will be the correct order. But, as I said, the first two steps will create a lot of irrelevant warnings due to the specific situation of a user, so we could look into training a machine learning model to determine the relevancy of a specific problem to the user. This could initially be fed by just a rule-based system, based on what the company has already set out for what they care about and what their specific situation is.
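A minimal sketch of such a rule-based starting point (the policy keys, check ids and thresholds are all hypothetical placeholders):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    check: str      # e.g. a CVE id or a linter rule id
    severity: str   # "low" | "medium" | "high" | "critical"
    path: str

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

# Hypothetical company policy: what this team has declared it cares about.
POLICY = {
    "min_severity": "high",
    "ignored_checks": {"RULE-123"},
    "ignored_paths": ("test/", "docs/"),
}

def is_relevant(finding: Finding) -> bool:
    """Rule-based relevancy filter; later, this decision could be taken
    over by a model trained on the users' accept/dismiss feedback."""
    if finding.check in POLICY["ignored_checks"]:
        return False
    if finding.path.startswith(POLICY["ignored_paths"]):
        return False
    return SEVERITY_RANK[finding.severity] >= SEVERITY_RANK[POLICY["min_severity"]]
```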
We could perhaps build our own linter, mostly based on the learnings that we get from experimenting with these linters and receiving feedback from the users. That linter would still just run in the CI/CD part. And in the next point, we could look into integrating the GitLab linter into IDEs for real-time suggestions.
That could also become very relevant for pipelines, for example. Sometimes you write a pipeline and a linter could already detect that it won't work. The current GitLab CI linter would already detect that it is a problem, but you wouldn't know that right away: you would have to copy your pipeline, put it in the linter on gitlab.com and then copy it back over.
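GitLab does expose that same check through its CI Lint API, which hints at how an editor integration could remove the copy-paste round trip. A sketch (endpoint behavior and required authentication depend on your GitLab version, so treat the details as assumptions to verify):

```python
import json
import urllib.request

def lint_pipeline(gitlab_url: str, ci_yaml: str) -> dict:
    """Validate a .gitlab-ci.yml via GitLab's CI Lint API (POST /api/v4/ci/lint)."""
    request = urllib.request.Request(
        f"{gitlab_url}/api/v4/ci/lint",
        data=json.dumps({"content": ci_yaml}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

with open(".gitlab-ci.yml") as f:
    result = lint_pipeline("https://gitlab.com", f.read())
print(result.get("status"), result.get("errors"))
```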
And the last point comes back to the big hairy audacious goal, our mission: whatever we have learned and created in the first seven steps, we will evolve into GitLab Assist.
So what could GitLab Assist be? For years, developers have been exposed to code completion tools such as IntelliSense and autocomplete. Basically every IDE has it right now, and they have started to incorporate AI into code completion.
A few examples are Tabnine and Kite, but of course also Copilot from GitHub. These products are very illustrative of what people would consider to be AI.
It's almost magic, right? You type something, and this whole text appears, your code. You could accept it; maybe it's useful, maybe it isn't. But there are a few concerns. Most of the applications currently out there are using GPT, so basically the same model with some tweaks, because it takes tremendous amounts of data and compute to train such a model.
This is where it gets a bit more tricky. There are two issues with the input data. First off, there's a limited set of data and everyone has access to it, so it's a level playing field. The other concern is that public repositories are not perfect: some of them are not optimized for security, nor for coding best practices; they're also just a place for a lot of people to simply store code.
The last point is that it's one thing to suggest code, but it's another thing to educate users on why it is good code. All in all, I think the chances of code completion becoming a commodity are fair, even with the help of AI. It might be worthwhile to aim a little higher, for example at full boilerplate templates.
These could adhere to company coding guidelines and be aware of licensing. So that is basically what I had written down, based on all the chats that I had with different people, but also just thinking about it, brainstorming over this idea and writing something down. I think the aim was to start small, start iterating and collect feedback.
So if you watch this, please provide feedback; I'm happy to discuss. You can comment on the merge request. I still need to create a project, issues, that kind of stuff. If you have any suggestions on what I should do, where I can start, or any tips on creating these issues, let me know; I'm available on Slack. Thank you very much.