Description
How can we keep critical code private but still use modern cloud technologies? This is an experience report covering self-hosting and the development of medium-to-large projects.
--
Stefan Schindler started over 15 years ago on the hardware deployment side of IT and has since become an OSS maintainer and Lead Software Engineer at tipi.build, working on the next generation of compiler toolchains for reliable software.
Something from a Linux system called the Steam Deck. Oh, there's so much delay, yeah. It will be nice; it's, I think, a six- or eight-core CPU. So it's an easy gaming handheld, and now it's running the presentation, yeah!
You can follow my blog. What do I do? I'm here in Linz to finish my master's degree; it should be done soonish, so yeah, end of the year definitely, probably end of the month. I maintain a couple of open source crates on the Rust side, and I'm starting a new job tomorrow.
That's why I'm moving away from Linz. So, the classical disclaimer: opinions are my own. My employer, tipi, is in a similar space, but I don't speak for them today. In my spare time I organize another Rust meetup, Rust Zürisee, around the lake of Zurich; all around that lake there are talks on Rust. And if you want to join the RustFest family event, you can, in October in Bristol.
Let's go to the table of contents. Or not, because I don't think you can read it, and the people in the stream will just get what they get. So, let's not waste any time. Continuous integration, CI: it's basically a thing to execute code on another machine. That's why I highlighted it here: it's remote code execution, very important.
Why is it so very, very handy? Because it's bundled with the repository. If I have 15 different versions or branches, or pull requests and merge requests and all of those things, and small things change, it will always build with the same code, even if I change the CI infrastructure itself. That's very, very nice. And then we need some advanced features, right? Like specific environment variables. We need secrets for deployment, or for pulling down proprietary data sets.
We have caches, which are sometimes very nice, and we may have artifacts. The difference between those two, roughly: a cache is read-write, and you can manually delete it per repository; artifacts are outputs that are generally uploaded to something else, like a website, through the GitLab functions. There is similar stuff on GitHub, but it's named slightly differently.
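The cache/artifact split can be sketched in a `.gitlab-ci.yml` fragment like this. This is my own minimal example, not from the slides; the job name, cache key and binary path are invented:

```yaml
build:
  script:
    - cargo build --release
  cache:
    key: "$CI_COMMIT_REF_SLUG"    # read-write, deletable per repository in the UI
    paths:
      - target/
  artifacts:
    paths:
      - target/release/myapp      # output handed onward, e.g. to a deploy job or Pages
    expire_in: 30 days
```

The cache speeds up later runs of the same job; the artifact is the thing the pipeline produces for others.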
We also have variables in the scheduling; that's basically the templating. Oh, is the mic back on? No, it's not! Am I gone? When did I cut out?
Did we just lose the stream? Okay, so yeah, I was at the templates: we can have symbols in our YAML file that will be interpreted. Yes, I hear myself again, very good. Another very nice feature is triggering events in different repositories.
Sadly, I cannot show that code because it's from one of my clients, but it basically works like this: they have an open source base installation, and whenever CI starts to run on the main, dev and staging branches, it sends signals to a deployment repository with all the secrets and all the configuration for production, and that part is very private.
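Since the real code is private, here is only a rough sketch of how such a cross-repository trigger can look in GitLab CI; the project path and branch names are all invented:

```yaml
notify-deployment:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH =~ /^(main|dev|staging)$/'
  trigger:
    project: my-group/private-deployment   # the repo holding the real secrets and config
    branch: main
```

The downstream pipeline then runs with its own (private) variables and configuration.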
Now, how do we think about stuff? We have one .gitlab-ci.yml file, which describes a pipeline: one thing that runs through. Inside that pipeline we can have stages, which are just blocks; we define the stages in order, and they run in that order. But because that's too easy, there's also another mechanism, a depends-on relation. So we have a full dependency graph, in which of course we can build loops, and then it will crash or just hang.
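Both mechanisms side by side, as a minimal sketch with invented job names (the depends-on relation is spelled `needs` in GitLab CI):

```yaml
stages: [build, test]

build-job:
  stage: build
  script:
    - cargo build

test-job:
  stage: test
  needs: [build-job]   # dependency-graph edge: start as soon as build-job finishes
  script:
    - cargo test
```

With only `stages`, jobs wait for the whole previous stage; with `needs`, a job starts as soon as its listed dependencies are done.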
So there are these two mechanisms. I personally like the stages because they're easy, but the graph is also very nice, because it draws you a nice picture and it auto-hides stuff. And it's very similar to GitHub's matrix, where you can say: make me a matrix over processor architecture and operating system, and then it just wants 16 jobs at once, for all the combinations, or more. Another thing: everything is per GitLab instance, the web UI that stores the code.
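GitLab has a comparable matrix feature, `parallel:matrix`. A hedged sketch; the variable names, values and script are examples, not from the talk:

```yaml
test:
  script:
    - ./run-tests.sh "$ARCH" "$OS"
  parallel:
    matrix:
      - ARCH: [x86_64, aarch64]
        OS: [debian, alpine]   # 2 x 2 = 4 job instances, one per combination
```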
Per instance we have shared runners and non-shared runners, and if you have code on the public gitlab.com instance, I would recommend just adding your own runner to projects that compile often or take long, because they reduce the amount of free time you have on the system. If you don't want to be surprised when you're close to the limit (it used to be 2,000 minutes per month; I think it's less now), just add your own system, let it run all the jobs, and there will be no more surprises. Fun fact:
you can just have it running in the background; it could even be on a laptop, and with some smart trickery you could set it up so that the runner does not run when it's unplugged from power, for instance. So when you code on the go and then come home to your office, just plug the machine in, and it will realize: oh, now I can process the queue, and it will run all your CI. GitLab is smart enough to recognize when a job is no longer needed.
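The power trickery hinted at here can be done with a systemd condition. This is my own sketch, not from the talk; the drop-in path assumes the stock gitlab-runner unit name:

```ini
# /etc/systemd/system/gitlab-runner.service.d/on-ac-only.conf
# Only start the runner while the machine is on AC power.
[Unit]
ConditionACPower=true
```

Note that systemd evaluates conditions only when the unit starts, so you would still need something (a udev rule, a timer, or a manual restart) to kick the service when the charger is plugged back in.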
If we compare that to some random VM, which is like five euros a month, plus looking at it for half an hour every month just to apply updates, for an unlimited number of users, it's a pretty easy sell, even to your boss. Disk space: just pick something with more than five gigabytes. Ten is probably tight; fifty is good enough.
Networking is usually free with servers, because you usually pay for outgoing traffic, and a GitLab runner is mostly pulling a lot of data in; inbound traffic is usually free with all the servers, so that's very nice. For caching you can use any S3-compatible storage; you don't have to use AWS. Oh, and if you do use AWS, be aware that they charge you for traffic within their data centers if the data centers are in different zones, and that can be very expensive. Just be aware of that.
Another project I sometimes advise is stuck on exactly this: they have a lot of storage in one bucket and another huge lot in another one, and they cannot synchronize them anymore. So they just manually synchronize the stuff they know about, which they know is wrong, but they cannot afford to download both sides, compare, and fix it automatically, because it's a couple of terabytes and that's just very expensive.
Another thing: privacy, safety, security. Deployment keys are one thing; license keys are another. Sometimes you're not even allowed to run certain software on certain CPUs. If you have, for instance, I don't know, an Adobe license that is for 16 cores, and the shared cloud spawns you on a 64- or 96-core machine, you are, sadly, in breach of the license. It would be really hard to blame you for that, but still: just put some old boxes in the corner and let them run it. Doing stuff locally, self-hosting, yay.
So here we see a typical flow from the runner's perspective, and it's all HTTP. Who remembers long polling? A couple of you. For the young ones: this is from before WebSockets was a thing, and sadly it still works; it's still needed. Long polling uses the TCP timeout, which is still 30 seconds, so it waits 25 seconds.
If there's no data, the server replies with "no data", and then the client polls again, over and over, until there is data. The point is that instead of waiting the full 30 seconds for an event that happens on the server side, we might wait only 10 seconds and be 20 seconds faster. It's still slower than WebSockets, because we have to reconnect all the time, but that's the thing on the right side.
The far right side is the executor; that communication depends on the plugin. And what are these plugins? It's usually Docker, let's be honest. Most people run it as a Docker container, or have it as a systemd service, which I personally find very handy, because with systemd we can restrict the service a little more without touching the service itself. We can, for instance, say PrivateTmp, so the service no longer sees /tmp from the system; instead it's locked away in some random directory that changes on every restart.
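As a sketch, that hardening could be applied as a systemd drop-in like this; the file path assumes the stock gitlab-runner unit name, and the second directive is my own addition for illustration:

```ini
# /etc/systemd/system/gitlab-runner.service.d/harden.conf
[Service]
PrivateTmp=true        # service gets its own private /tmp, recreated on restart
NoNewPrivileges=true   # optional extra restriction, not mentioned in the talk
```

After adding it, reload with `systemctl daemon-reload` and restart the service.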
You see, this is distribution-dependent. When you are on testing, for instance Debian testing, or whatever the Ubuntu leading edge is called, I don't remember, there is a chance that when the testing name gets announced, your updates will fail for a couple of weeks until GitLab itself has a new folder. So you can also hard-code the distribution release, or override it.
Ansible is highly customizable, sometimes a little too much, and you can override almost anything. So, the executors: the first two, SSH and shell, are the most dangerous.
You shouldn't use those unless you have one hundred percent control and don't accept any untrusted merge requests at all. Another option is Parallels or VirtualBox, where you spawn a virtual machine. There used to be one for KVM, but it was discontinued, sadly; there was also one for Xen, also discontinued.
A
There
is
the
docker
one
and
if
you're
just
simulink
the
docker
binary
to
podman,
you
don't
have
to
worry
about
anything
because
Portman
is
API
compatible
and
there's
a
Docker
machine
plug-in,
which
is
the
same
as
Docker,
but
just
starts
a
VM
first,
that
one
is
really
handy,
but
it's
experimental.
Sadly
yeah
and
of
course
we
have
the
kubernetes
one
and
kubernetes
is
great,
because
you
already
have
namespaces
and
whatnot
and
yeah.
This syntax has been deprecated, so now it just gets reinterpreted as test:cargo. Well, it works; it's just one stage, that's all it is. I recommend printing the compiler versions, no matter what language: if you use Node, just say node --version, or Java, or whatever. It's printed, it will be logged, and then you have a much easier time figuring out why some feature is broken, or why it gives you random errors or invalid syntax; you can check that the version was correct.
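A minimal job in that spirit; the image tag is just an example:

```yaml
test:cargo:
  image: rust:latest
  script:
    - rustc --version && cargo --version   # printed and logged on every run
    - cargo test
```

When a feature breaks weeks later, the logged version line tells you immediately which toolchain the passing and failing runs used.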
Okay, next example: parseint. parseint is a little more complicated, because now we have features, and we have nightly. The difference between nightly and not-nightly is the image line; we just add a tag, nightly, at the end. That's it, that's how we change images. And for features, we just run the tests without flags and with flags.
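Sketched out; the crate's real feature names are unknown to me, so the flag here is invented:

```yaml
test:stable:
  image: rust:latest
  script:
    - cargo test                          # default features
    - cargo test --features extra-fmt     # hypothetical feature flag

test:nightly:
  image: rustlang/rust:nightly   # only the image line differs
  script:
    - cargo test
```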
Then the actual test coverage. Again: printing the Rust version, then testing the whole workspace. This --workspace flag is very handy, because it tells cargo: go search, go deep, test all the things. Then we can modify the output format: you can see here we have JSON, then we have unstable options, which is why it needs the different compiler, and report timings, and then here's a normal shell pipe. Oh yeah, and these little squiggles at the end are just line continuations that I had to insert because it wouldn't fit on the slide.
Then we calculate coverage by just running a normal build, and then the LLVM coverage tooling does the actual coverage reporting. But we need to have the tests run first, because the coverage step will fail weirdly if the Rust code is somehow broken, like if you have syntax errors just in the tests: it will compile under normal circumstances, but then the coverage will fail because there's an error in the tests. So we need to run the tests first and then build again. But hey, the CI does it.
We will never forget it, and it'll just work. And remember the artifact? This is an artifact: the output is some XML, and sadly the latest version of GitLab broke this a little bit. I haven't fixed it yet, but you could just tell it: here's the report, in the format of JUnit, which is a Java testing framework.
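The JUnit report wiring looks roughly like this; the converter producing the XML is left out, and the file name is invented:

```yaml
test:
  script:
    - cargo test   # plus some converter that writes JUnit XML to results.xml
  artifacts:
    when: always         # upload the report even when tests fail
    reports:
      junit: results.xml # GitLab parses this for the merge-request test widget
```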
Yeah, I will fix that at some point. There are other formats; I don't remember, I think four or five different coverage formats are supported. Maybe, I don't remember, but you can find in the docs which ones are supported now.
Another example is colorblinder; it's a tool I wrote a couple of years ago to simulate different kinds of color blindness, and it will output a picture, or a bunch of pictures, or a combined picture, whatever you like. The pure Rust version is fine as is, but not the UI, the GTK version.
We need some dependencies there, and now we have this before_script, which gets run before every job; I use it to set up the environment. It's Debian-based, which is very nice, so we just say update, upgrade, and then install. And this is very handy: install with no-recommends, so we don't get extra stuff we don't need. That gives the smallest footprint, and a smaller footprint means faster download times and faster runtime for the container. Then we install the libraries we need, like Pango and GTK 3 (I should update that to GTK 4 at some point), and then we just run our normal script, just cargo test --workspace.
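The setup described above, condensed into a sketch; the exact package names are my guess at typical GTK 3 build dependencies on Debian:

```yaml
image: rust:latest   # Debian-based, so apt is available

before_script:       # runs before every job in this file
  - apt-get update && apt-get upgrade -y
  - apt-get install -y --no-install-recommends libgtk-3-dev libpango1.0-dev

test:
  script:
    - cargo test --workspace
```

`--no-install-recommends` keeps the container image lean, which the talk calls out as a real speed win.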
There you go, and it just runs through. And of course we can play with stuff: one of the additional features is deployment. You have probably heard of Pages; GitHub has it, GitLab has it, it's a static site hosting service.
It used to have some bugs. For instance, when you trashed the web service it would just be empty, and then you had to rerun all the jobs. But that's pretty easy: you just go into the latest pipeline, there's a little icon, you press it, and it runs the job again. The nice thing is, it runs it at the same commit with the same environment, so unless we pull something from the outside, like the time or a download, it will output the same result in the end.
So how do we deploy? Stage zero is setting up some sort of trust, right? And SSH is really great for that, because we can have verification in both directions. We have one token, the SSH deploy token: this is basically a private key, which we can generate on our machine with ssh-keygen, and the output of that will be the value. It's just a one-liner, and it will be saved on the other side.
The other side is the server. And we don't have, what's it called, the X.509 TLS trust chain, right? We don't have a trust anchor with SSH, so we just ship it ourselves: we record the known_hosts file, put it into the environment, save, and then we are ready. We have another script again that just sets it up.
We install rsync, we install SSH, we create the folder, we place the keys in there. This is an old script, because in the beginning you could only have environment variables.
Nowadays you can have file-type variables, but it's a little tricky if the folder doesn't exist, so I still keep it that way. Then we change the access permissions, then we have some make deploy; this can be whatever in the project. In this specific one I use Makefiles, because I have a lot of recurring symbols, so I use Makefile templating as well. And at the end I just remove the key again. That might sound silly, because the container should get deleted anyway, but with any garbage collection there are no timing guarantees.
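Pieced together from the description, the deploy job could look like this; the variable names, key type and paths are all invented for the sketch:

```yaml
deploy:
  stage: deploy
  before_script:
    - apt-get update && apt-get install -y --no-install-recommends openssh-client rsync
    - mkdir -p ~/.ssh && chmod 700 ~/.ssh
    - echo "$SSH_DEPLOY_KEY" > ~/.ssh/id_ed25519 && chmod 600 ~/.ssh/id_ed25519
    - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts   # the self-shipped trust anchor
  script:
    - make deploy
  after_script:
    - rm -f ~/.ssh/id_ed25519   # don't rely on container garbage-collection timing
```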
If you have Rust version 1.67 or older, it will copy the whole registry all the time, and I don't know how big it is, but it's a couple hundred megabytes. So it will just download the index all the time, and then all the dependencies as well, which is great for your stats on crates.io, but it's really slow. On a big project I had with actix, it was pulling dependencies for 15 minutes, and then building (it had the debug cache already) and testing everything was just three minutes of that,
so: enabling the registry cache with this handy little symlink. Nope, doesn't want to triple-click. It's a little hacky, but it works great. Moving it into the before_script is important, because if we don't, the cargo command will already have created the directory structure; with this, it gets redirected into our cache.
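I don't have the exact symlink from the slide, so this is a guess at the general technique: cache cargo's registry inside the project directory and link ~/.cargo/registry to it before cargo ever runs.

```yaml
cache:
  key: cargo-registry
  paths:
    - .cargo-registry/

before_script:
  # Must run before any cargo invocation, otherwise cargo creates the real
  # ~/.cargo/registry first and the symlink can no longer be placed.
  - mkdir -p .cargo-registry ~/.cargo
  - ln -s "$PWD/.cargo-registry" ~/.cargo/registry
```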
So that's very nice. We could also cache the release binary, right? But I use that for deployment, and there are some weird bugs sometimes with caches in release mode, and I want a reliable deployment. I don't deploy every commit, maybe every fifth commit or something, so I'd rather wait for the full release-mode build, which is, I think, nine-ish minutes. Nine minutes instead of 20: still good, still roughly a 50 percent improvement, and I don't get weird artifacts.
A
Great
great
yeah
I
see
git
love,
yeah
gitlab
has
great
runners
in
general,
but
they
run
in
the
Google
cloud
and
I.
Don't
trust
the
Google
Cloud,
so
yeah
I
won't
use
them,
but
sometimes
it's
funny.
If
you
have
a
runner
on
old
discs
and
not
enough
RAM,
it
will
be
very
slow
because
the
discs
are
just
slow
and
sometimes
you
notice,
weird
lags
in
the
lock
and
that's
when
there
is
a
flush
somewhere.
If any part of the whole chain does a flush, like pip does at the end of installing dependencies, or like some programs do when they crash (they write the core dump and then flush), you get weird timing artifacts in the environment. So, I think we waited half a minute and I don't see anything; oh yeah, moving on. An S3 cache bucket: GitLab has a very good retention policy for artifacts, it's just 30 days plus the latest one, so we don't have to worry about that.
What I mean is: when the client starts pulling a cache file, as long as the client receives the cache file fully, it's fine even if we delete it right after, because the client, meaning the runners, will just search this S3 bucket according to a scheme: is there a cache for this repository, for this job? We can even add our own mechanisms and templates for this pattern.
Yeah, so the question was: why a remote cache and not a local cache? Very good question. In the beginning you could do that, so I had a folder, /cache, and I would just throw everything into it. I don't know why it changed; it's probably to do with the Kubernetes support, but I don't know. I'm also a little sad about it, because now I would have to run another service just for the cache, which seems silly, so currently I'm not caching for most of my stuff.
Yeah, so someone from the audience had their own experience with caching, and apparently caching all the things took longer than the gain from the cache. Sad, but yeah. Was NTFS involved in any of that?
Okay, yeah: if you run NTFS with lots of small files, you will be sad. Microsoft's latest recommendation is to use their new file system, ReFS, on a second partition, because you cannot boot from it currently. It's supported in Windows 10 with the latest service pack, or whatever it's called now, it's not Service Pack anymore; Windows 11 supports it too, and the latest server as well. Oh, okay, so of course not the Home edition; you're just talking to professionals, I mean.
A
It's
just
one
binary
just
downloaded
configure
it.
Please
set
a
password.
If
you
do
it
like
don't
be
like
the
CIA.
That
loses
lots
of
spy
data
over
an
S3
bucket
or
the
TSA,
which
uses
the
no-fly
list
over
an
F3
bucket
yeah,
and
so
many
others
yeah.
Please
set
the
passwords.
We
don't
need
another
FTP
leak.
Oh
it
doesn't
scroll
sad.
Okay.
We use a different pattern than for the testing half, so we can prefix stuff if we want to; otherwise all jobs in one repository share the cache. But as I said before, I would not cache production releases anyway, though you can. So, conclusions: in my opinion, running a runner is very low effort. It just works; you get updates every now and then, but I have never had any trouble, and it's basically maintenance-free.