From YouTube: May 2020 Runner office hour
Description
Recording from the Runner office hour during the Q2'2020 Hackathon
A: This is perfectly timed, really intentionally, with the current hackathon that's going on. So if anyone's got an MR they've started, or is thinking about making a contribution to the Runner, we'd be more than happy to talk about it live — give some direction, give some feedback. Barring that, what we'll do is just talk about some things that are probably helpful to anyone going through that process.
A: Yeah, and I think today the plan to start off is actually to get Tomasz, one of the senior engineers on the team, to give — not actually a review, but a bit of a walkthrough of the life cycle, from our side, of how the Runner sees a CI job; plus any questions, anything else, maybe, and whatever that goes over.
C: I don't actually have a script or a link for this, so I'd probably do it a little bit differently every time.
Sure, okay. So let me share my screen and we will do a little walkthrough through the sources of GitLab Runner and see how a CI job is being handled by the Runner — because, after all, this is the main purpose of the Runner: to execute our jobs. The Runner as a process, when it's started — we know that it may have different executors that execute the job in different ways, using different technologies. But there is a common part for any kind of Runner deployment that we will have.
This is the main process: it goes through all the [[runners]] sections from the config.toml file and asks for jobs for each configured GitLab connection, and then, if a job is received, we start processing it. So our main entry point is the function that we see here. It's the method that is started when you execute the gitlab-runner run command, or when you start the gitlab-runner process through a system manager, because the system integration that we have under the hood also uses the run command.
So after doing some initialization — like starting up the metrics server if it's configured, starting up the session server, and a few other things — this line starts the goroutine. Let's go there. This is the main goroutine that triggers job requests. This is where the check_interval from the config.toml file is being used and interpreted. So what's most important here is that we check how many [[runners]] sections...
...we have defined in the config.toml file. I think every one minute the Runner checks whether the config file was changed; if it was changed, then we reload it, and then we also start tracking the new entries and stop tracking the old ones that were removed. Anyway, at this moment we know that we have several runner entries.
However, if we have more than one [[runners]] entry defined — if we have more than one runner registered from one configuration file — then the overall number of requests will be bigger than one could think. For example: if we have three runners but, as I said, we define check_interval as nine seconds, then between two subsequent requests for project A — from the runner that is registered for project A — we will get this nine-second waiting time.
However, we will generate three different requests within those nine seconds, each one after three seconds, because this is what's computed here. And then, knowing what the sleeping time is — the pause between generating requests — we use the method named feedRunners to...
...go forward. So let's go and quickly see — there is nothing special happening here, except that we check whether the runner is healthy. By "the runner is healthy" we mean that if we were receiving network communication errors for a few subsequent requests, then we mark the runner as unhealthy, and then we stop asking that GitLab instance for new jobs for some time. I don't remember now how long that time is; we would need to look it up in one of the files.
This is a goroutine that waits for a specific runner entry to be triggered to go forward. Not focusing much on what's happening here at this moment, let's go to the processRunner method — and here the magic begins. First, we check with the executor provider whether we are even able to do anything. We will not focus on this right now; maybe at the end there will be a little time to explain the difference in acquiring and releasing between, let's say, the docker executor and the shell executor.
Then it checks us against the configured limits. We have the global concurrent setting, which defines the maximum number of concurrent jobs that will be handled by the Runner, no matter which [[runners]] entry they are coming from; but then, for each of the entries, we can define a specific limit — and this is the place where that is checked. So we check whether the specific runner entry that we are now trying to handle is allowed to execute any new job. Let's say that we can go ahead: we are still within the limit.
This finally triggers the HTTP request to the GitLab API to get the job. So we have already passed several checks, and we still don't even know whether there is a job waiting for us. First, we check yet another limit: there is the request_concurrency setting, I think, on the [[runners]] section, which is checked at this place. If, again, we are still within the limit...
C
If
we
are
allowed
to
generate
another
request
to
the
key
club
give
up
API,
we
call
the
network
request
job
method,
and
this
is
finally
the
moment
when
we
talk
with
the
club
and
then
the
magic
happens
on
the
key
table
set,
get
lab
checks.
What
kind
of
runner
is
asking
for
a
job
if,
if
the
runner
receiving
outraced
for
this,
if
the
token
is
matching
gitlab,
makes
the
tax
matching
up
this
moment?
So
it
sees
what
runner
asks
for
a
job.
It has the list of jobs that could be available for such a runner, and it does its magic: checking the tags, checking the CI minutes — for example, if this is a shared runner on gitlab.com — and hopefully, at this moment, we get the job payload with all of the information about the job that we are interested in. Speaking of which, we can go to the...
...response definition. If someone is interested in how the job payload of the API is defined, then common/network.go is the file, and this structure is the starting point. This structure — and all of the structures that it references and that compose its fields — defines the full information about the job that the Runner gets.
The Runner is not aware of anything outside of what it gets here. The Runner doesn't work in a pipeline context; the Runner doesn't work in a project context. It doesn't know anything about the specific pipeline settings that you may have set in the .gitlab-ci.yml file. The Runner cares only about the specific payload of the specific job that was received. So if something is here, we can handle it in some way; if something is not here, then we will not know about it.
And here is the final thing that will happen with the job — checking whether we even have any errors that happened — but this will be the final step of our job's lifetime, so let's skip it for a moment. And here are some things that we do in the background. Having the job data payload, we create a common Build object and assign a few things to it; this is the place where we update some Prometheus metrics that you can export from the Runner.
Inside the build's run, the most important call is in fact here, because all of this is still preparation: setting some contexts, defining the build logger — that is the way we, let's say, multiplex the log messages so that they can be saved in the Runner process log but also sent to the job trace. And at this moment we have the executor that we will use to handle the job.
We should have it ready, and we can call the run method to proceed with the execution — and this place here is where the execution starts. So, after all of the preparation in the previous methods, here we start the job execution in a separate goroutine, and then this is the place where we wait for that job to be finished. As you can see here, there are three possible ways the job execution can be...
...interrupted. Maybe let's go in reverse order. The last one — but the one that is most important for all of us — is that the job finished. It could be finished with a failure; it could be finished with a positive result. But in any case, here we will get either an error, or a nil representing that the job finished successfully, and this can be immediately processed, and we can start getting back to finalize the job processing.
C
The
second
way
is
the
signal
received
on
the
system
interrupt
structure
field.
This
is
something
that
is
propagated
from
the
multi
that
go,
for
example,
when
you
will
start
the
runner
in
the
foreground
and
you
will
hit
control-c
or
when
you
will
kill
signal
or
the
I'm.
Sorry,
not
the
sector's
in
signal
or
the
secret
signal
several
times
at
some
moment.
...the Runner decides that the interrupt signal was sent, and it starts propagating the signal down the goroutine stack — and this is one of those places. So when we receive this interrupt signal from the user, we just abort the job; and then there is another place where we send the information to the job script execution itself to be interrupted — to be stopped in whatever context it is executed. And the last thing is the context — the context that we pass here to the run method.
At the moment when the context is finished, we also finish the job processing, and then we try to interrupt the job. When we exit from this select, we send the cancel to the job — in case it was one of those two situations — and we wait for the job to be finished, to finally handle all of the remaining steps.
However, we said that we start the job, but we didn't see how it is started. So let's go here. This method describes in what steps — as we name them —
...the job is executed. And what we can see here: we have prepare, then get_sources — this is the place where we either call git clone or git fetch; and all of the things that are happening around this — this is the place where the git lfs commands are executed, this is the place where the submodules are handled. And since, if I remember, 12.7 or 12.8, when we introduced the section markers in the job trace, each of these steps is described in the job log.
I think it was 12.5, anyway. A little above, you can see how each of these constants is mapped to the text that you can see in the job output. So everything that happens after getting the sources from the git repository, and before the next line that represents a step, is happening through this. restore_cache is the place where we try to restore the cache — either the local one, or downloading it from the remote cache, like an S3 download.
download_artifacts is the moment where we use the job payload to download all the artifacts that were defined for this job to be downloaded. And, as you can see here in the construction of the error checking, a failure of each of these steps — except one, which I will point out in a moment — is something that stops the job processing. So if we have an error on the prepare step, then all of the rest will never happen, and we finally go back with the error taken from the prepare step.
This is the place that users are mostly interested in, because this is where before_script and script are executed. The important thing to know is that, while before_script and script are two separate entries in the .gitlab-ci.yml definition — and before_script can also be set at the global level, so we could say there are three different places in .gitlab-ci.yml where these scripts are defined — in the job execution they are just concatenated and executed together.
So before_script and script share the same execution context: anything that was prepared or exported in before_script, anything that relies on the shell context, will also be available in script. And here we can see the after_script execution. This is one of the things that we know is a little confusing for users — like, "if I prepare an SSH agent in my before_script, why can't I access it in the after_script?"
This is because after_script is executed in a totally separate context. There are two reasons why. First, after_script was introduced as a way to do some cleanup no matter whether the main build script failed or not. We can't put them in one shell execution script, because we fail the script immediately on a detected command failure — so if something failed in before_script, or in the things defined in script, we would never reach the parts defined by after_script.
We made this after_script a phase that will be executed always, no matter whether script or before_script failed or not, and we also don't care about its result. So you can do any cleanup you want, but it will not affect your job if it fails. After finishing the script execution, we get back to a few predefined steps.
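The context behaviour described above — before_script and script concatenated into one fail-fast shell invocation, after_script run as a completely separate invocation whose result is ignored — could be sketched like this (a simplification that shells out to `sh` directly; the real Runner generates per-shell scripts instead):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runJobScripts joins before_script and script into ONE shell run
// (so variables set in before_script are visible in script), then
// runs after_script in a SEPARATE shell whose exit code is ignored.
func runJobScripts(beforeScript, script, afterScript []string) error {
	combined := strings.Join(append(beforeScript, script...), "\n")
	// `-e` aborts on the first failing command, which is exactly why
	// after_script cannot simply be appended to the same script.
	err := exec.Command("sh", "-ec", combined).Run()

	if len(afterScript) > 0 {
		// Fresh shell: nothing set above survives into this one.
		_ = exec.Command("sh", "-c", strings.Join(afterScript, "\n")).Run()
	}
	return err
}

func main() {
	err := runJobScripts(
		[]string{"GREETING=hello"},
		[]string{`test "$GREETING" = hello`}, // sees before_script's variable
		[]string{"echo cleanup"},             // runs in a fresh context
	)
	fmt.Println(err) // <nil>
}
```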
This is where we respect the "when" setting from the artifacts section; and then something added quite recently: upload referees. Referees are a nice feature of the Runner that we started experimenting with. This is something that, for example, allows us to request some Prometheus metrics configured in the configuration file and upload them as another type of artifact.
In that case, this is the error that we are mostly interested in: if the job failed, we try to upload the artifacts-on-failure, and if the artifacts upload also failed, then the job failure is more important for us — so we check it and we send it up the call stack first. If we had not seen an error before calling the artifacts upload, then we return whatever the artifacts upload ended with, which may be an error or may be a success.
So this is how the steps are defined. Let's take a quick look at the execute stage and what it does. Skipping all of the logging: we check what shell is defined for the executor that we use, and we generate the shell script. This shell script contains things like setting up the variable exports for the variables that were defined...
...the configuration that fails the script execution on the first command failure — which is handled differently for bash and differently for PowerShell, for example; the configuration that enables the backtrace output if the CI debug trace feature is used; and this is, of course, where, at the end, all of the script lines that the user defined are added. What is important to know is that from each line defined in before_script, script, and after_script, we in fact generate two lines in the script.
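"Two lines per user line" works out to an echo of the command followed by the command itself, so the job log shows what is about to run. Roughly (a bash-flavoured sketch; the real shell writers also handle quoting and per-shell differences):

```go
package main

import (
	"fmt"
	"strings"
)

// generateScript turns each user-defined command into two shell
// lines: one that echoes the command into the job log, and the
// command itself.
func generateScript(commands []string) string {
	var b strings.Builder
	b.WriteString("set -e\n") // fail fast on the first failing command
	for _, cmd := range commands {
		fmt.Fprintf(&b, "echo %q\n", "$ "+cmd)
		b.WriteString(cmd + "\n")
	}
	return b.String()
}

func main() {
	fmt.Print(generateScript([]string{"make build", "make test"}))
}
```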
So after calling this, we have the script. We prepare the executor command structure that will next be used in the proper way by the different executors. We define whether this is a predefined command or not — the user script and the after_script steps are not the predefined ones; those are what the user has control over.
Everything else is predefined. This is used, for example, in the docker executor, where we distinguish whether the script should be executed in the image defined by the user or in the helper image that we provide and build. Then we execute the build section, and this method here — the executor's run — is what passes the prepared and ready command to the executor, to be finally executed in the final environment. What happens there is another kind of magic, probably for another call. From this place...
...we can see what the general flow of preparing and using the executor is. So here we can see, for example, the acquire that we started the story with — this tells the Runner whether the executor is able to execute a job at this moment or not — and here we can see, for example, the run method, which gets the prepared executor command and does the execution that we really care about.
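The executor contract he shows — every technology plugging into the same small surface — suggests an interface along these lines (an illustrative reduction; the real interface in the Runner's common package has more methods, such as Prepare, Finish, and Cleanup):

```go
package main

import (
	"errors"
	"fmt"
)

// ExecutorCommand is the prepared script handed to an executor.
type ExecutorCommand struct {
	Script     string
	Predefined bool // helper steps vs. user-controlled script
}

// Executor is an illustrative reduction of the executor contract:
// shell, docker, kubernetes, virtualbox, ... all implement the same
// interface, so the build logic never cares which technology runs
// the script.
type Executor interface {
	Run(cmd ExecutorCommand) error
}

// shellExecutor is a toy implementation that "runs" by recording.
type shellExecutor struct{ ran []string }

func (s *shellExecutor) Run(cmd ExecutorCommand) error {
	if cmd.Script == "" {
		return errors.New("empty script")
	}
	s.ran = append(s.ran, cmd.Script)
	return nil
}

func main() {
	var e Executor = &shellExecutor{}
	fmt.Println(e.Run(ExecutorCommand{Script: "echo hi"})) // <nil>
}
```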
So at this moment, in this place, we have the job that is now running. If you are using the kubernetes executor, this is the moment where the pods are being created and the job is being handled in the pod. If you are using the VirtualBox executor, this is the moment where the Runner will start connecting with VirtualBox, creating the virtual machine, and then trying to connect with it to execute the script in the virtual machine.
Whatever this execution returns will then be handled as the job result. So if we get an error here, then — in a moment I will show at which place — we will mark the job as failed: either failed because of a script failure, failed because of the job timeout, or failed because of something that went wrong at the Runner level.
I will say in a moment how this interval is defined. Until the job is finished, what is happening in this loop is the incremental update. In the incremental update, we first send the patch trace request — this sends only the new part of the job output that was received from the job script since the last patch request — and if the patch request succeeded, we send the touch job...
...request, to tell GitLab that the job is still working. Let's start with the touch, because it's a little shorter. What's most important here is this part: we basically send the information that the job, with the ID that we hold, is in state "running", and we send it to GitLab. What GitLab does at this moment is update the updated_at field of the job object. This prevents the mechanisms in GitLab from considering such a job stale.
Then we wait before sending another request. If we have the job page opened, it will currently be every three seconds; if we closed the page — or if we never even opened it, because the job was started in the background by, let's say, a git push, and we never opened the job in the UI — then GitLab will instruct the Runner that this request should be sent every 30 seconds. This was a huge improvement that we made two or three releases ago, just because of the scale of gitlab.com.
So this is something that we were very happy about adding a few releases ago. And this is what happens constantly: we watch the script execution in the executor — in the proper way for each of the executors — and the build logger that I was showing previously connects to the trace object and pushes these updates constantly to GitLab. So you can see the job output more or less as live updates.
Then we mark the job as failed, and we use the trace's fail call to propagate this to GitLab properly. If we had an error, but this was not a build error but something else, this is what we consider to be a runner internal failure, or a system internal failure — so, for example, when using the shell executor, someone deleted bash from the machine where the runner is running.
That ends with the Runner calling fail with a failure reason. What fail does is set the failure reason — and on the Runner's side we have only three of them: the script failure, something that is out of our scope because it happened in the script written by the user; the job execution timeout, when the job script was being executed for too long a time; and the runner system failure that I was mentioning a moment ago. And with the finish call, the Runner tries to do two things.
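Those three runner-side failure reasons look roughly like a small set of constants reported with the final state (the identifiers below mirror the names he lists; treat them as a sketch rather than the canonical definitions):

```go
package main

import "fmt"

// JobFailureReason categorises why a job failed, as reported by the
// Runner in its final status update to GitLab.
type JobFailureReason string

const (
	// ScriptFailure: the user's script exited non-zero -- out of the
	// Runner's scope.
	ScriptFailure JobFailureReason = "script_failure"
	// RunnerSystemFailure: something in the Runner's own environment
	// broke (e.g. the shell binary disappeared from the machine).
	RunnerSystemFailure JobFailureReason = "runner_system_failure"
	// JobExecutionTimeout: the script ran longer than allowed.
	JobExecutionTimeout JobFailureReason = "job_execution_timeout"
)

func main() {
	fmt.Println(ScriptFailure, RunnerSystemFailure, JobExecutionTimeout)
}
```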
First, it sends the final trace patch requests while we still have something to send: if we are still receiving something from the job output, then we will try to send it, until we get a response from GitLab or until it is finished. And then the final status update, where we try to send the final state of the job.
With another deferred call we release the build — which, for example, uploads the Prometheus metrics — and we finally call the provider's release, which, in the case of some providers, some executors, restores some of the capacity. And at this moment the job no longer exists in the Runner; whatever happens next is happening in GitLab.
A: Thanks for sharing that, Tomasz. There's actually a YouTube video I'll link to — I'll put it in the chat right now, but I'll also put it in the description of this one once we upload it — that kind of shows how this all looks at a bit higher level from the GitLab side: how a job gets from GitLab, the Rails application, to kind of where Tomasz's walkthrough starts and ends. That conversation kind of goes over it, so it's pretty cool, pretty interesting, some good diagrams. I think it's more looking at diagrams than at code, let's say.
D: The issue is that the Runner and GitLab itself started supporting more different types of artifacts. Before, it was just supporting files: it would zip them up and upload them to GitLab. But then it started supporting reports — which are things like JUnit reports, test reports, licensing reports, and things like that — and those are handled differently by the Runner, and by GitLab as well. So this was a suggestion from Tomasz, actually, in one of the reviews we were having: that we need to be more verbose and explicit about...
...what kind of artifact we're uploading — whether we are uploading a reports type or, for example, a normal artifact. And then — am I pronouncing your name right? Perfect, thank you. So she contacted me, asking: can we achieve this in the current code base? I took a look at it with what we wanted.
Initially, from the initial discussion, we wanted to just say "uploading artifacts report: junit", right? But the Runner does not have the information that the artifact was JUnit. And this goes very well with what Tomasz showed earlier — the job response. The communication between GitLab Runner and GitLab is all through JSON, and GitLab keeps that response with the type — and the type is basically things such as junit, license management, and things like that — but it does not specify the report.
So I said: okay, the least changes we can make, to make it more clear to the user without sending more data from GitLab to the Runner, is to just print the type. So, for example, here there's just a quick example: "uploading artifact code_quality", "uploading artifact license_management", for example, or "uploading artifact archive".
So it's a very simple change. I already looked at it, and I was going to submit a few more comments to you. But the first comment is: I see that we did the quotes here. Now, I imagine this is because of the example I gave here, but Go actually provides this automatically using the %q verb, and it will quote it automatically — it's like %s, but with quotes.
Like that, you can move all this into a single format call. That was one thing I was going to suggest — and you've more or less already made it so, which I really appreciate. Now, the next thing is the if-else condition. Me personally, I'm not a big fan of if-elses; they're just two branches that you have to take care of. So one thing I was going to suggest is to do this with a default value. Let me open up the code — I like to give examples of which direction we'd go.
...you were already writing this code — perfect. And then, when it's something like this, with all the commands: pretty sure we do have tests, but we don't have tests to check each string — that doesn't make sense; it would end up being a brittle test. But I always enjoy running through manual QA, like: let me run this manually, as a user, to see it as a user. Sometimes it really helps, because sometimes — not in this case, but sometimes, for example — I wish...
That's perfectly fine. It's really nice that you started asking in the issues before just opening up a merge request, because that helps us guide you in the right direction and makes things a lot quicker for you, and for us as well. Tomasz, do you have any objections to this change? ... Perfect.
A: I know we're pretty much at time. No, I don't think I have anything else. Thanks for the kind of unusual one this month, Tomasz. What I love about doing this is that we create this artifact: any time we have this, everyone — whoever needs to look at it — can go back and watch this video later, so it'll be really helpful. But yeah, good luck with the hackathon. Thanks.