Description
Tweet to start: https://twitter.com/axccl/status/1347665411722989568
Repository: https://gitlab.com/tonka3000/conan-cpp-example
For loops in GitLab YAML with parent/child pipelines: https://gitlab.com/instantlinux/docker-tools/-/blob/master/.gitlab-ci.yml#L38
B: Yeah, of course — I read the tweet at nearly the same time as you did, and I used your recommendations to try to build a repository that demonstrates how you can use this ABI dump stuff to compare two different ABIs, with the tool from Mr. Axel Huble.
B: You can click here, and here you can see: there are two main tools for that. One is called abi-dumper, which — as the name suggests — dumps the ABI into a file. It can be installed directly from any package manager where it is available, but of course it can also be used as a Docker image. And then there is a second tool, called abi-compliance-checker, which is more or less based on these dumps.
B: So you give this tool two different dumps, and it will calculate and report the difference — which things changed, for example an API that was previously public has been removed, or something like that. I just tried to use this and port it to GitLab. It's not 100% complete, because I wanted us to be able to build something today.
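The two-step flow described here can be sketched as a CI job. This is a minimal sketch, assuming a Debian-style image and a library called `libexample.so` — none of these names come from the actual repository:

```yaml
check-abi:
  image: ubuntu:22.04
  script:
    # dump the ABI of the old and the new build of the library
    - abi-dumper old/libexample.so -o old.dump -lver old
    - abi-dumper new/libexample.so -o new.dump -lver new
    # compare the two dumps and emit an HTML compatibility report
    - abi-compliance-checker -l example -old old.dump -new new.dump
  artifacts:
    paths:
      - compat_reports/
```

`abi-dumper` needs the library built with debug info (`-g`), and `abi-compliance-checker` writes its report under `compat_reports/` by default.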
B: But the general idea is: I've built a completely simple example based on CMake, because I know CMake very well.
B: It is purely merge-request (a.k.a. pull-request) based and, as I said, a simple project — just the ABI stuff. I experimented a little, and the hardest thing for me, per se, was converting the information from the ABI dump — or rather from abi-compliance-checker — into something you can upload to GitLab, because there seem to be some restrictions in the API. You can copy-paste some HTML into the Markdown on the website, but the API is very restrictive there, so I've built my own wrapper around it.
B: First, I reduce the HTML from the given report, and the second part is this, where I take the XML output of abi-compliance-checker and convert it into something you can read. Of course I've tried to make a merge request here — you should see it, if I left it there. Yeah.
B: This is what happens when you push something to the branch. I've shown which one is the base commit and which is the head commit, and from the report I just used the binary-compatibility table and the source-compatibility table. There is much more behind it — I can show you the full report; the tool does much more than just that.
B: So this is the original report. You can choose between binary compatibility and source compatibility — binary compatibility is of course ABI, and source compatibility is API. In normal programming you can break the ABI and still have a stable API, and vice versa. Okay — in many cases, when you break the API you also break the ABI, but there are cases where that is not the case. And here you can see the test results — and of course I never changed anything in the branch, so of course nothing changed.
B: So it's green — one hundred percent passed, no problems at all. And to show you this little thing: the black magic is behind the GitLab YAML file. What I've done is simply use the Docker image from Conan — that's the main package manager I use for C++ stuff; they have images for many setups.
B: "Hey, please write out this HTML file" — and the second call is the same call, but with --xml, so I get the XML representation. Then I use CI_MERGE_REQUEST_IID to let my script convert the output and upload it to the merge request via the GitLab API. And I skipped one part — the part where I just build the stuff — because there seem to be two modes in this ABI dumper: one based on just the header files, and one based on the binary.
B: Of course I have only the one lib dump file, and it gets uploaded to the artifacts; and I built the second job, which builds the whole thing, and there I used what Michael mentioned in the tweet — CI_MERGE_REQUEST_DIFF_BASE_SHA — to check out the base commit, build it, and also collect that file. And in my ABI stage, as I showed you, I use these to feed the compliance checker and the dumper to produce the report file.
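The base-commit build described here can be sketched as a job. This is a hedged sketch — job names, paths, and build commands are assumptions; only `CI_MERGE_REQUEST_DIFF_BASE_SHA` is the documented GitLab variable:

```yaml
build-base:
  stage: build
  only:
    - merge_requests
  script:
    # check out the merge request's base commit and build it
    - git fetch origin "$CI_MERGE_REQUEST_DIFF_BASE_SHA"
    - git checkout "$CI_MERGE_REQUEST_DIFF_BASE_SHA"
    - cmake -B build-base && cmake --build build-base
  artifacts:
    paths:
      - build-base/libexample.so
```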
B: So that's the main, rough concept. I'm not that aware of all the techniques you can use to optimize this — it's just a simple setup — because if you have a bigger project, I think always rebuilding the old stuff could really be a performance problem. But I'm not using GitLab's cache that much, so I'm not an expert in that.
B: I think you can cache the old results and use the cache normally, and if the cache is not there, then you build it — a kind of performance optimization. And what was a really interesting problem for me, something I had never used before in GitLab: I used `only:` — of course you can also use the newer `rules:` stuff.
B: I used the `only: merge_requests` tag for the merge-request jobs, and the first thing I had trouble with was how GitLab handles this, because in the beginning I got this detached state. I had never seen that before, coming from GitHub.
B: So what I've done, at the moment, is set `only: merge_requests` on every one of these three jobs, so I get only one pipeline. If you leave it off — for example here — then these become two pipelines, and then I cannot use the artifact system to get the artifacts from one job to another. That was something I had trouble with in the beginning, because I had never used this method before. And this is also something you don't really see that quickly: the merge-request variable is only set when you have this tag.
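The one-pipeline setup described here looks roughly like this (a sketch; the job names, scripts, and artifact paths are invented). With `only: merge_requests` on every job, all of them run in the same merge-request pipeline, so later jobs can consume earlier jobs' artifacts:

```yaml
build-head:
  only: [merge_requests]
  script:
    - cmake -B build && cmake --build build
  artifacts:
    paths:
      - build/

check-abi:
  only: [merge_requests]
  # artifacts from earlier stages of the same pipeline are fetched
  # automatically; dependencies: pins which jobs to fetch from
  dependencies: [build-head]
  script:
    - ./check-abi.sh build/
```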
B: Maybe Mike can tell us more about this, but it was a little bit tricky for me, and the only way I found, when I didn't have this, was to request the merge request back from the commit via the API. I don't think that's the really clean way to do it, but it worked for me. The problem is finding the right one, because now I have the merge request, but it was really hard to figure out — I'm not 100% sure how GitLab filters the others.
B: That was really hard for me. And the second thing: I found a really cool feature. I just use the free tier on gitlab.com, but there is one interesting way to make much more of this visible, because GitLab has this report stuff, and you can upload metrics.
B: This is normally used for things like OpenTelemetry, or the OpenMetrics format, which is originally based on the Prometheus format — I hope that's correct — and I convert the XML report to this as well, although I can't verify it. But the cool thing about it: it shows you the text file at the top of your merge request. The way I chose for the free edition is to just push a comment to the merge request — the problem is, you have multiple ways to do that.
B: If you always push a comment, the newest one is at the bottom of the page. The cool thing about this report stuff is that it's always at the top, and the metrics report was the cheapest way to get that: I just emit a simple Prometheus format — or "metrics", as it's called today — fill it out, and then you can see it at the top. But it's a Premium feature, so if you don't have Premium — or at least Starter, I guess — then you can't see this feature. Yeah.
A: Yeah, I was thinking of caching — caching the ABI dump, or the object dump, and only checking whether it already exists for this merge request. So do we need to use a specific key? Yeah, probably we need to store the dump file with the merge request ID, or with the IID, the internal ID.
A: It should be an artifact — but the thing is, if there are multiple pipelines, what I'm producing should be sandboxed per pipeline. So the idea of keying on the merge request doesn't make sense.
B: If I understand the documentation correctly, the cache only sits on the runner side — it isn't deleted, and it's not uploaded to a central store — with the exception of distributed caching.
C: Yeah — a really easy example of when you typically use distributed caching is when you have two runners and you want to share the cache between these two runners. For that you can't use the normal way, because by default the files are just put under /cache. But if you enable distributed caching, the cache will be uploaded to an object store. It isn't limited to S3 — it can be Azure or anything else.
C: No, it's free! So literally, when you have to set up multiple runners, you can use it with S3 buckets. And — this is the configuration of a runner — so you need to configure your own runner if you want to use distributed caching. Oh.
C: Yeah, but when you're using gitlab.com with the shared runners, distributed caching is enabled by default. — Oh.
A: Caching is Core, so in the free version you can rely on that, and there is —
A: And the metrics — like the metrics feature you mentioned before — that is Premium only, or Premium and up, but caching and everything that is required to run a CI/CD pipeline is available for everyone. The thing is, we need to configure the cache, and this needs sort of this thingy — and probably we might want to use a cache key.
C: Yeah — `untracked` is for when files are not tracked by Git in the current stage but you want to cache them, for example the binaries or so on. Then you can use the untracked stuff.
A: Kind of — when the merge request is created, you want to create the file; and if it doesn't exist, we want to create it, or we want to use it from the cache.
B: But do I always have to run a GitLab job myself to check it via code, or is there any built-in mechanism to only build this? Because I only start the —
C: So probably the cache will also — no, it won't be put into the next stage; it will be shared between all the jobs. Yeah.
A: We should use — we should pick an empty directory for it, and then cache it.
B: Because I think you can use the same definition for now — I already build this old version into a different directory, to make it easier.
A: I have it on my macOS, so I don't know.
A: But I also have — which `test`? Is it in /usr/local? It's a shell built-in command — at least it says so. Has it built in? It doesn't matter — we can either use this or —
A
This,
but
the
thing
is
you
want
to
check
for
the
directory,
which
is
dash
d
and
you
can
use
the
or
the
double
or
and
say
hey
if
this
doesn't
return.
True
now,
if
this
returns
true
then
do
this.
A: There are so many ways to go wrong in Bash. Would you rather write a Python script now? No, I'm bashing Bash currently.
A: I think there's a German blog post from Christian Quintop, who's working at Booking, about why you don't want to write Bash scripts — but I'd need to look it up.
A: If you ever need to look up how GitLab does things, you should also find some examples. I don't know if there's a multi-line script in this specific example, but I would say at least in the Auto DevOps example there should be something around this, or in the Terraform one. Yeah — if you do the multi-line stuff in YAML, you don't need the leading one.
A: I think the security scanners are using that, because I was deep-diving into — is it the SAST one?
C: That's why this will not execute when the built lib exists.
A: — a different channel. Lastly: when I use this key, this thing, then I will get the one from the base, I would guess — so that I get it. The question is: do you want it compared to the last tag?
B: Or to the branching point? Both are possible — it's just an absolute value versus a relative value — but I would personally use the branching point, because then you see just the difference from the merge request, and not from the whole history. Otherwise, if you break the API in two ways, you always see both in every merge request.
B: Because you have your stuff, and this is my branching point — this is my branching point, so I want to have it prefixed, suffixed, or whatever, with this key. Then I know it's the branching point for the branch, and I can see the differences against it.
A: It's even better from a performance perspective if you do it like this, because per commit — even if you generate 100 MRs based on this base commit — you only generate a dump once. A commit is immutable: you cannot change a commit into something else, like adding a new file on demand.
A: So it's really unique, and it saves you a lot of performance. If you develop fast and have many merge requests, the base dump is only generated once.
A: We don't have any fallback keys; we just have one cache. The only thing I think we need to ensure is: do we need the cache in the second job as well?
A: No, I think it should work. I think for the key we need to double-quote it as a string — at least then it should be safe.
C: But also the path, and so on.
A: It should work, because right now you don't have two competing merge requests. Yeah.
A: There's an API for looking up whether this commit is part of a merge request, whether this event happened, and things around that — so I think it's event-based somehow. A merge request triggers an event, and you end up in a predefined scope where you have additional variables available in the environment. I think if you enabled that globally, it would slow down the entire pipeline, because you'd need to collect all the stats, all the variables, all the values — and in the end you'd learn: hey —
A: — you either have merge requests or you have the commit ID. And yeah, I think that's the reason. If that doesn't make sense, and someone else says "hey, it's super easy to fix, or to implement in a more performant way" — do a big grep in the source code.
B: It tells me — because we cached this — okay, that's the reason why this happens. So, is there a way to clean the cache? Because now —
B: I'll just do it my way — my own way.
B: This change fixes it, you can —
A: We have used it for C++ caching.
A: The thing is, if you have lots of files — and this typically happens with Node.js — it can be that cache writing and cache reading are slow. There is an open issue to improve that, and I think there are many comments in it, but yeah, it's an I/O challenge: finding ways to deal with ten thousand or a hundred thousand files in a cache is not super convenient.
C: Yeah, but in the last release there was an optimization for that, because what they're currently using for archiving is a zip algorithm that's not so fast. In my company we have Node.js, and some of these weird jobs need to cache ten thousand files or something like that, because everything gets downloaded — and for that there's a new feature called fastzip. Currently you can enable it via a feature flag.
A: But if you, for example, say "hey, I'm using the free version" — with, I think, 400 CI minutes a month — you can easily try to optimize that and say: hey, this job doesn't need to run ten minutes, or five; we can push it down to one minute. The other thing to think of is, for example, you have GitLab runners in AWS, or you use a cloud resource, and then you get the bill and say: yeah —
A: — the jobs are running out of minutes and you're paying ten thousand dollars now, just because the CI pipelines are not optimized.
B: Yeah — we use CMake a lot, and you mentioned this CMake-plus-ccache setup. We don't use ccache, because on Windows there is no easy way — or I don't know that it works there. But the caching thing: when you have, for example, your binaries compiled and your performance suite or something like that, you can share this with multiple jobs and get parallelism for free. Our tests run 20-30 minutes or something like that — and right now we can't access the old build.
C: Yeah — in terms of definition, an artifact is typically used for passing results between stages.
C: Caching, instead, is mostly used for your dependencies.
A: Yeah, I was thinking of caching the base commit build for the merge request — that should be cached. The build job which builds the head commit doesn't need to be cached, because that should always be regenerated.
B: For me it's interesting: every time I write to the cache, I assume the cache is a bucket, which is obviously a container of ours. So every time I write to the cache — is this a new container? If you just upload it, I guess it merges the results together — or is it just one run, or something like that?
A: If you don't clean the cache, I think it stays there forever, and if you rename something in your CI job and its output files, the old thing could be lying around. So when you see strange behavior in a CI pipeline, it's better to clear the cache beforehand.
B: Because I'm thinking — I think we could skip this as a whole; not the check-api job, but the build-old stuff. Because when this works the way you mentioned, I would copy this into a directory with this hash stuff.
A: I think it's hard to understand then. So right now it's like — it's —
A: Yeah, that's true. Probably you can optimize it with some `extends` or templates, but I think, from a development perspective, it's really hard to go there and say: hey, you have either this state or this state for the job, because we came here from this long flow chart. I think, for debugging and for getting things going —
A: — two jobs are not that bad, especially because you can define the artifacts. They don't hurt if you let them expire in one week or something, and you can download the artifacts and do the stuff yourself — if you are a developer and just don't trust the CI system for some reason, or you just want to reproduce it on your own system, you can just download them.
A: The API, yeah — but the problem is, then you need to write curl in your CI jobs, and while you can do it, and reuse certain variables for the authentication, I think it's not the preferred way. I would — it's —
A: Yeah, you can go there and abstract it away and say: hey, I'm relying on the artifacts. The problem is, if, for example, the expiration policy kicks in, the artifact might be gone and you still need to regenerate it — so it depends.
B: The pipeline seems to work — I have something pushed here. So what I do now is make an API break: I just take this class and make the int into a double. I hope the ABI detector will detect this change; I never tried it myself. Let me be a good C++ programmer until it compiles. It is a double, and that way the source should stay the same, but it should break the ABI.
B: It should be great — let's switch from int to double, and in theory this should detect the change. And the GitLab metric — the metrics report, which I mentioned before — this is cool.
A: What I would do, for example, is also make a comment while the API — or just use some different methodology, which can come in handy here.
A: I think there are, again, different ways to go wrong. You can make it work; it might not be as beautiful as, for example, a metrics report with Premium, but in the end you probably want to see a table, and the table is —
B: It shouldn't stop here, because it processed it. Then it tells me: let's upload the artifact for a failed job — okay, yeah. So what's the best way to ignore the return code in Bash?
A: You should define a variable where you capture all errors — so every line's error state is captured — and then, at the end, you evaluate whether the error is different from zero, or just exit with the error code. And this is the specific scope where I would write a Bash script.
A: Yeah — I think one of the requests lately, and this is a total legend of a request, was to support for-loops, because you could generate jobs out of that. I would totally love that, but I don't see it fitting into the configuration language. If you want to go that way, you need to invent your own domain-specific language.
B: So, as you can see, we found it. I think I have no highlighting at all in this, but I would guess I can just use normal HTML to highlight it, because — yeah.
A: Okay, perfect — so actually we can just use this project of Alex's.
C: They've declared it with only one for-loop. This is a repository I found really interesting: it uses the loop for generating this, exports it as an artifact, and then later, in each job, the artifact is included.
B: So it's the same strategy, which is generating the jobs — but it's not a direct YAML feature; it's a GitLab CI feature.
A: It's a cool way of doing it if you are bound to that. Maybe you should write a blog post about this as a follow-up — it looks interesting. If you do write a blog post, please share it.
C: Yeah, this is a problem, because you can't have dynamic stages in GitLab.
C: Yeah, I think this awesome guy — he's doing some monitoring or Linux stuff — the repository is very well written, everything. So —
A: It also uses an old SQL setup, which is like an old UI for configuration stuff.
A: One thing I wanted to share — I'm also copy-pasting it into the Zoom chat — this was an example of how Icinga uses ccache: basically just defining the path for ccache, and configuring ccache so that it only writes to a specific local project directory.
A
This
should
be
hidden
somewhere
else,
and
then
you
like
copy
over
the
c
cache
into
the
the
actual
cache,
oh
okay
and
this
space.
So
when
you
so
see,
I
think
the
default
for
cache
is
two
gigabytes.
You
can
extend
that
by
configuration
with
some
environment
variables.
I
think,
and
the
whole
thing
is
being
done
in
the
in
a
docker-based
image
which
has
all
the
build
tools
and
cache
pre-installed
and
if
you
kind
of
extend
c
cash
to,
I
don't
know
eight
gigabytes
or
something
like
that.
A: — you can also build big projects which might exceed the ccache limit all the time. I think when I started with C++, I was looking at the compilation procedure for 20 minutes every time I changed something. With ccache, it detects when you touch or modify a file and only recompiles the object file that would be generated out of it — from a specific shared library, or whatever is defined by the CMake file.
A: And I think, when I didn't change much in the system — except for a header file which is included everywhere — the build time was like one or two minutes. Oh — because linking is a different story. Linking takes longer, yeah, because you need to link all the object files into an executable binary afterwards.
A: But there are also tricks to optimize that — different ideas. One idea we actually implemented: we merged the source code of all files in a library directory into a single big, huge .cpp file, which was then compiled and generated just one object file, and this was then linked together. So instead of linking 500 object files together, we were just linking five, which is faster.
A: It consumes more memory, so the Windows box might run out of memory then, but in the end, if you compile lots of source code, you need to find optimizations. Yeah.
B: Definitely. If you — hopefully I find it now — there is this post by the... hacker. Oh god, I can't find it directly.
B: All this compiling — he also compared everything, and it depends. But if you have a modern toolchain, unity builds are awesome, as mentioned: you put multiple files into one, you need more RAM, but the speed-up is really big. And he mentioned a tool for fast builds on Windows — that's more or less the ccache for the rest of the systems — and, yeah, inline annotations and whatnot.
A: One thing we encountered previously: everything was a library loader, and you would load shared libraries into memory on demand — if you don't use a MySQL backend, you don't load that library on startup. And then there was a problem: the symbol lookup between the shared library files in memory was fragmented, or was taking too long, because you executed something in lib-remote, then it called a function in the base, and the function —
A: — the base called something in lib-cli, and then it was jumping back and forth. So the symbol lookup took quite a long time. We solved that by creating a statically linked binary. It's not super amazing from the deployment side, and also not for debugging and finding the right address, because the release build differs from what you get in your debug build — but still, it was faster, and this was for performance reasons at runtime later on.
A: It made the build process more complicated, yeah, but in the end it was worth it, because the customer didn't say that it was slow.
B: A technique that we use: shared libraries which contain only interfaces or abstract classes, and we just forward-declare every class — because most of it is pointers and references — and this way you can really easily break the dependency hell away from the implementation.
A: One thing Gunnar and I did — I think it was in 2013 or something like that: with Java, you automatically have getter and setter functions for class members, and in C++ you need to write the getters and setters yourself. And since we had objects with like 100 member attributes, it was like: yeah, can we generate that somehow?
A: Yeah — probably we need to generate the C++ code. And since we had already invented a configuration DSL for the Icinga 2 backend, we thought: well —
A: — we could use the same approach, with flex and bison, parsing something into object code. And we created so-called .ti files, where the class was defined in an abstract language with all the attributes. Then it said: well, it's a state attribute, and it got a specific flag or handling or something else — like a meta language — and this generated C++ code.
A: So before the entire compilation happened, the class compiler was actually generating the class's C++ code, and then everything was merged together. Now explain that to someone who is new to Icinga but is a professional C++ developer — or maybe someone starting with C++, having read a book about C++ and best practices — and then: "Oh, there's a class compiler. What's that? That is not C++." Yeah, but it's — like, lazy developers.
A: For us it was more or less like: if I want, for example, support for TLS as a configuration attribute, you need to define enable-TLS as a boolean, then you have a string to define a certificate, and whatever. And we said to our contributors: look at the existing code, it's just there. "Yeah, but this .ti thing — do I really need to —?" You just add one line and then everything works, and then you call the getter in the workflow.
A: So: "Yeah, just try it out." "But I don't believe that." "Yeah, just try it out. It works." It saved us the time of staring at compiling, so we could do something else — and yeah, in the end I think it saved us multiple hours and also removed errors like "did you define this setter", etc., or "the API is not compatible", or something else broke. The only problem was that the class compiler code itself was actually really ugly.
A: Yeah — I don't have the code, I don't have something to show you, but just imagine — maybe you know Boost signals — when someone says "on active changed", like calling a function or changing the attribute, you want to trigger a function in the backend; like, someone creates a new value and you want to subscribe.
A: This is — with round-robin slots — you subscribe to that event and write it to Graphite or to InfluxDB or something else, and we kind of used it everywhere. So any time an object attribute was implemented, it automatically generated these Boost signal events in the background, and specific other things — like activate/deactivate things around this — and I think the whole cluster high-availability and load-balancing system actually depends on that.
A: So it's super complicated, and super complicated to explain. The only thing is: it generated everything in the background, and as a fresh contributor to the project you only needed to know that there is a lot of abstraction — lots of abstraction — and you just copy-paste from what's there, right? This is what I would call myself in the end: I'm a copy-paste developer using best practices.
A: And the thing is, they can always refactor it, and they probably will. If I see code where I think: well, maybe we shouldn't use goto in that regard, or something else — you could refactor the code, make it more error-resilient, and then someone else approves it and they merge it, and then, okay, they generate new bugs — probably — but that's not my game anymore.
B: So, we have a Qt code base, and we use PythonQt — it's not the most popular library in the world, but it also generates code, for Python wrappers around the Qt object model. Then you have an embedded system, and they abstract the stuff away, so you can write XML files to inject code or to tell the system how it should transpile it.
B: And then you get a lot of C++ code, which you then build, and then you have an integrated Python script — an embedded Python interpreter — together with your Qt code. So it's the same thing, but we just use it off the shelf. So, for building your own DSL: do you use any specific technique, or did you really make it completely new, in detail?
C: Okay, I need to take my chance — I will drop off. We'll see each other next week, probably. I need to go — bye, bye, bye.
A: Bye. I think it was just using — what was it using — the tools —
A: — the class lexer and the class parser. Let me just share my screen now.
A: Flex, bison — so it basically parses and tokenizes the whole thing, where it doesn't know what it actually needs to do, and —
A: Yeah — different statements. It's a simplified version, basically, of the actual config compiler, which is in lib-config — there is the config parser and the config lexer — because there are lots of things in the DSL which are defined there.
A: It wasn't like that in the beginning — it wasn't that big — but from defining the scope and the workflow, from an if-condition you go over to while-loops and functions and everything — crazy things in there. So we kind of reinvented Python, JavaScript, PHP, somehow, into our own DSL — and it was fun.
A: Well, yeah — this is basically what the DSL looks like. Where is it — come on — plugins, for example: it's like a template, and an object, and attributes. It's similar to how the Nagios configuration looks, but it's a whole new format — you parse it, and you can use macros, like the macros with the double dollar signs. The thing is: oh, we could use Hangman.
A: One thing we did — I think it was the year when we implemented it, so Gunnar implemented it — there was an IRC bot, and the IRC bot was written as a Tcl script, and within the Icinga DSL you can program: you can define functions, you can define them in a specific scope — so this is the scope, and everything inside lives there.
A: It needs a global variable too, of course, and then you can do things. One of the ideas to showcase this was: there is a console — like the Python REPL, or Ruby's, and such — where you type something and you get feedback — and you can play Hangman in the Icinga 2 console. Hangman — output: nine — okay. And the fun thing was that there was also the possibility to wrap this somehow in a Tcl script, and the IRC bot was responding to it.
A: So in the background it was talking to Icinga somehow, and it was fun. Gunnar is really a mastermind at what was possible. An example of the .ti files — and this is the interesting part — could be this: this is the implementation for the MySQL backend.
A: This is basically how it looked. It was somehow C++ code, like we're used to — also interrupting things — but then it generates different things. So activation priority is like configuration order — activating objects, starting things — because it influences some things. It's weird.
A: We started adopting the C++11 standard in 2016/2017, because of the compilers on CentOS 5 — when CentOS 5 was dead, we could actually use certain things. But yeah, more or less, this is all config. So this is like defining "this is a configuration attribute", but, for instance, there is the Icinga Application class — this also has state for things — and, as you can see, there are many .ti and .cpp files. This is also an object, and this has — oh, this doesn't have state there.
A: No — a Host object has a state, yeah. For example, a host can be up or down, and this is a timestamp which is stored as state, and this kind of added a boolean flag marking it as "we need to save this over restarts". So there is a cache file which gets written: if you shut down the application, this file gets written, and on the next startup you read from the cache and know whether the host is up or down.
B: So it's — it's a bit like something from C#.
A: And I don't really remember what the keyword was — was it to specify it or to deny it, I don't know — but you could actually write inline getters in there. Which means, for the display name: if it's empty — we created our own string class — we would use the object name, and otherwise we would use the display name.
A: Okay — I've been working on Icinga 2 from 2012 to 2020.
B: But I can totally understand that you built your own DSL, because it makes this thing more maintainable. The problem is, C++ can do everything, but, like with any language, you can shoot yourself in the foot really fast. So I think, yeah, it's harder for beginners who start with the code base — especially when you have one contribution from one person in three years or something like that; then it's much harder — but for your core maintainers, I would guess, it's much easier.
A: Yeah. One thing I wanted to share — and this is the last one for the stream right now — oops — so, I've tweeted about this.
A: Actually, they should be pinged about it so they can try it out — and everyone else too, yeah. So I would like to say thanks for preparing everything and thanks for the time, and let's see what beautiful topics we will find next week. So — bye on YouTube, bye.