From YouTube: GitLab Kubernetes Agent repository overview

Hello, this is Mikhail. I decided to record an overview of the GitLab Agent repository, to help anyone who is interested understand how things work and, hopefully, contribute.
So, let's start with packages. I don't have a specific plan, but let's start with packages: the top-level packages, directories, and files. Obviously, this one is what you know it is.
Let's talk about the other ones later. cmd is a convention that is very widespread in Go; it stands for commands, that is, the binaries that your repository produces. We have two binaries here: agentk and kas, the communication server, plus some additional files that are shared by these two packages. Then we have internal and pkg.
These are conventionally named packages as well. internal contains implementation details: packages that should not be depended upon by anything else that imports this project to use some of its code. pkg is the opposite: packages that are okay to depend on, where the author of the repository perhaps provides some guarantees, like semantic versioning or other stability guarantees.
Here we have two packages: the configuration for the agent, which is what you see in the configuration repository, and the configuration for kas, which is just the file that kas reads to get its configuration. Both define the structure using protobuf. I guess let's start with this one.
The configuration is described using the top-level agent configuration message, and then it drills down into the lower levels, with field IDs and everything. The name is just a way to specify the field's name in one of the encodings.
There's nothing really interesting here, and then there's this one, the kas configuration. As I said, I also try to maintain the comments here as documentation, and there's also nothing really interesting here. Maybe one thing: we are using the standard well-known types — that's how they are called — from the protobuf types repository. This is the Duration type, and it comes from the standard place, the magical import from google/protobuf.
Well, that's basically it here. Let's go into it, which is the integration test — not really working at the moment, basically just scaffolding for it that I've started but never finished. doc is documentation, obviously. build is all the various things for the build, as the name implies, and deployment is the deployment for agentk. These files are for Bazel as well; we'll talk about Bazel at the end of the video, I think. Let's focus on how things work in general.
So, as I said, I guess we can start from the commands. We have agentk and kas, and they talk to each other using gRPC. The interaction is defined here, in this directory: we have a proto file which describes the messages and, most importantly, the service. This is the service that kas exposes, and it's meant to be used by agentk.
The token is mapped to an agent, and the mapping is stored in the database. So the agent sends the token, we look up the agent record using the token, using the agent record we look up the project, and from that project we know which Gitaly node stores the data. We talk to Gitaly and fetch the file — the configuration file.
Then we send the configuration file back to agentk; agentk parses it and handles the configuration. This is how configuration flows. An interesting thing here — at least it was for me — is importing types from another proto file. You see, this is an internal proto package, a package for this file, and we import the public configuration, which is defined here in the pkg part, referring to that package and message type.
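The lookup chain described above — token, then agent record, then project, then Gitaly — can be sketched roughly like this. All type and function names here are hypothetical, for illustration only; the real kas code is different:

```go
package main

import (
	"errors"
	"fmt"
)

// agentRecord is a hypothetical stand-in for the database record.
type agentRecord struct{ projectID string }

type store interface {
	agentByToken(token string) (agentRecord, error)
}

type gitaly interface {
	fetchFile(projectID, path string) ([]byte, error)
}

// fetchAgentConfig mirrors the flow described above:
// token -> agent record -> project -> Gitaly -> configuration file.
func fetchAgentConfig(s store, g gitaly, token string) ([]byte, error) {
	rec, err := s.agentByToken(token)
	if err != nil {
		return nil, fmt.Errorf("look up agent: %w", err)
	}
	return g.fetchFile(rec.projectID, "agent-config.yaml")
}

// fakeStore and fakeGitaly are toy implementations for the sketch.
type fakeStore map[string]agentRecord

func (f fakeStore) agentByToken(t string) (agentRecord, error) {
	rec, ok := f[t]
	if !ok {
		return agentRecord{}, errors.New("unknown token")
	}
	return rec, nil
}

type fakeGitaly struct{}

func (fakeGitaly) fetchFile(projectID, path string) ([]byte, error) {
	return []byte("config for " + projectID), nil
}

func main() {
	cfg, err := fetchAgentConfig(fakeStore{"tok": {projectID: "group/project"}}, fakeGitaly{}, "tok")
	fmt.Println(string(cfg), err) // config for group/project <nil>
}
```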
Okay, then the second method that agentk calls. In the agent configuration — let's go there — we have deployments. It's a list: repeated means it's a list, an array — a slice, in Go — of manifest projects, each of which is the ID of a project, something like that. So agentk starts.
It invokes this method to get objects for each of these repositories. So for each repository it starts a separate goroutine, and that goroutine is responsible for fetching the objects and synchronizing them with the cluster where agentk is running. As you can see, this is the request, which is just the project ID.
Agentk takes the project ID from the configuration — from here — and sends it to kas, and then kas responds with a stream of objects to synchronize, which is the commit ID and, again, an array of objects. Each object is the YAML that is stored in some file. The file name is just there for error messages: if there is an error parsing the file, what file was it?
And this is just the YAML bytes — the bytes that we will parse. Currently we only support one file, but we will support traversing directories, or at least multiple files in a directory; that's why it's an array — repeated means multiple, as I said. And it's a stream here as well, because agentk keeps the connection open and kas sends updates when they happen. So kas is the one doing the polling.
Kas is what's polling Gitaly, for both of the open connections, and when there are updates — a new commit is detected — then it handles that and sends the update back, either for the configuration or for new objects in the repository.
So if nothing is happening, then there is no traffic — except maybe keep-alive messages — and there is no polling that the agent is doing. This is very efficient, I think: the polling happens close to the source, where the changes are. So this proto file is where the interaction is defined. There are only two methods at the moment; we will have more some time in the future.
This is just the protoc compiler output for Go: it generates the representation of the messages and the RPCs for the proto file.
Okay, I think we can go and look at — okay, maybe a few more things here. internal/agentk is the source of the agent, apart from some bootstrapping code that is here. agentrpc is what I just showed you, which is the gRPC definition. And api is just various API definitions: types, structs, and stuff that is used globally — information about the agent, information about the GitLab instance, information about the agent that is returned by GitLab when kas asks about it, and information about the GitOps project that we fetch from GitLab as well.
Here we are importing a type from Gitaly; this type describes a repo, a Git repository, basically. It contains all the information that Gitaly needs — oh, that's a lot of stuff, all right — all the information Gitaly needs to find the repository on disk, and basically anything else that is needed.
Okay, and that's probably it for internal. We haven't looked at any code yet, so let's try doing that, and I think it makes sense to start with the commands. So agentk: as I said, agentk is the thing that's running in the cluster, so that's what it does. This is just reusable stuff that is coming from here; cmd is the name of this package, you can see it in the imports.
Unlike Ruby, there is no magic: you can easily find where things are. So this is cmd; you can just click, and you definitely go to the right place in your sources or in the libraries. And this is just plumbing to start the program properly: seed the random number generator, start the program, and exit with an error code if there was an error. Then this sets up command-line flag parsing, and this sets up SIGTERM and SIGINT handling.
Then this is like a factory that produces something that is run above, and this starts the application for agentk. So here we just define command-line flags — this is coming from the Kubernetes libraries, to bind into command-line flags — and then we parse them, and so on. And this is all just setup: the token, the credentials, establishing the connection to kas.
That's really nothing very interesting, but the main thing happening here is that we are configuring the kas connection — this is the most interesting thing. We do the dial, the gRPC dial, and then we pass this connection to agentk. That's where the logic lives, and it's in internal.
I like to structure programs like that: you have main, which is the entry point, and then you have the non-main part, where everything else lives — not all of the logic, but the bootstrapping logic — and it bootstraps something that is not here but is a separate type. So agentk, the agent, is a separate type which doesn't know where the client is coming from, where the GitOps engine factory is coming from, how it works, or how to access Kubernetes; it just has interfaces.
This approach is also good for unit testing, and I use it a lot here in the tests.
Okay. So when the agent is started, it starts polling the refresh-configuration endpoint — the one that's defined here, GetConfiguration. This is just an infinite loop until we get a signal that we should stop, and we call this method every...
...10 seconds, and this method is a loop by itself. So, as you can see, we call GetConfiguration on the kas client, and then we consume the response stream here, applying the configuration that kas sent. This configuration starts the workers to do GitOps.
GetObjectsToSynchronize is the same: this is the loop that is consuming the stream from the server. So if the connection breaks, this method exits, but we also use it in a loop, so we poll — though the polls are very rare, because typically you connect and you don't disconnect. But if you disconnect for some reason, you retry the connection every 10 seconds.
This is how agentk interacts with kas — these two APIs. I don't want to go deeper, because there's a lot of logic in how things are handled, but that's not really important; I just want to show the high-level building blocks.
So let's look at kas. Well, let's start with the command again, because it has an identical structure: a main that's using the same plumbing, and then a new-from-flags function that also binds the various flags.
Then we read the configuration from the file: we load the configuration proto file, apply defaults to the configuration, and then also apply configuration from the flags — which is why I want to drop that, because then I can delete this. Then we create this implementation of kas, which lives here as well, in the same cmd. So we have a bootstrapping thing that parses the flags and the configuration, and we have the options — the options that kas should work with — and it just contains...
...the configuration file contents, basically. Okay, and it's got a Run method, which is what starts the real kas. It uses the configuration to start the network listener, to establish the connection to Gitaly, and to build the GitLab client; it constructs all those things from the options. And then this is where we construct the real kas, the server. Again, as you can see, we inject all the clients: the Gitaly pool — something that pools the connections — the Gitaly servers, the GitLab client, and various configuration parameters. And then this is the server...
...and metrics. Then it's started using the Run method, which just sends the usage ping information. The interesting thing here is that the server struct implements an interface. We can go to the definition of the interface that it implements, and it's generated — you see, agentrpc .pb.go; this is a generated file.
It says that it's a generated file. So this is the interface that our server implements, and it's the gRPC server that calls these methods on our server. This is how we implement the API, basically. This is what is called when agentk wants the configuration, and then there's the other one.
Okay, let's not go into the low-level details; I think this is probably a good overview. But let's see what else we can talk about. We have the GitLab client — nothing really interesting there — but let's talk about the Makefile, I guess, because this is what you will use to, you know, run tests and so on.
So if you make changes, you can just run test, and test is defined here. It formats everything using goimports, generates and updates the BUILD files for Bazel, and runs the tests. Then there's test-ci, because CI runs this job, and this is what we run to run all the unit tests and build all the packages. So: bazel test — note, not bazel run — which also builds all the packages; the ... means "all", basically. And then we also build the integration test; we don't run the test here, we only build it.
So if you make any changes to the proto files — any of them — run this task; it will regenerate the generated code.
Okay, what else is there? We also have mocks, which we generate using gomock; it's a library that can generate a mock from an interface — given an interface, it generates the mock. So we generate quite a few mocks for several things, and these are the mocks that we use to test the business logic; we inject them in the tests. And we use go generate to run the commands in each of the directories.
So we have the doc file, which defines the go:generate command, and go generate actually does go run — go generate delegates to go run, which builds this binary and runs it.
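The pattern looks roughly like this illustrative doc.go — the file name, import paths, and mockgen arguments here are a sketch, not the repository's actual directives:

```go
// Package agentrpc (illustrative doc.go): go generate scans for the
// directive below; "go run" builds mockgen and then executes it.
package agentrpc

// This would write a mock of the hypothetical KasClient interface into the
// current package. The real files use different paths and interface names.
//go:generate go run github.com/golang/mock/mockgen -destination mocks.go -package agentrpc gitlab.example.com/agent/agentrpc KasClient
```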
It's like a Russian doll. And we tell it to generate mocks into this file, and to generate mocks for these interfaces, and that's what we get: mocks for the kas client and so on. We have mocks for the Gitaly client, the Gitaly pool — sorry, the GitLab client, the Gitaly pool, the GitOps engine — and for Gitaly, that's another one. And for all the RPCs — the interfaces to do gRPC — we also mock them, generate mocks for them, so that we can test the business logic in kas and agentk, and to regenerate them... yeah.
I guess let's just talk about Bazel a little bit. So you can think of a build as a graph of actions that the build tool needs to perform to produce a result.
So imagine the graph of what needs to be done. You tell Bazel: I want these two vertices of the graph, and anything that these vertices depend on gets built, basically recursively. It's like a depth-first search, but also maybe handling the vertices concurrently, for parallelism...
...to speed up the builds. So, basically, let's actually find this: it's in cmd/agentk — container-race... it should be here, except that it's not, because this is the macro that generates this target, agentk. It's defined here, and we know that it's defined in this file because that's where it's coming from — again, no magic. We defined it in the .bzl file in cmd, and it just defines the same targets for kas and agentk.
That's why I use a macro here. And we define the binaries: a normal binary, a binary with the race detector on, a binary that is built for linux/amd64, and the same thing but with the race detector. And then we define go_image, which is a Docker image with a Go binary in it; that's what it is. It uses rules_docker to download the base image and then package the binary that is defined here — the underscore-linux one...
A
Is
this
binary,
so
we
will
package
an
md64
linux
binary
into
into
a
container
here.
Basically,
that's
what
it
does
and
it's
called
the
container.
So
when
we
we
we
run,
we
want
container
race.
So
it's
the
next
one.
This
one
is
the
linux
binary
with
red
race
detection
detector
on
so,
if
we
ask
bazel
to
build
these
targets,
it
will
you
know,
build
all
the
the
whole
graph
that
they
depend
on,
and
this
is
the
basically
the
idea
behind
brazil
to
to
represent
the
build
as
a
graph.
Then it also gives us — well, the build is composed of reusable libraries; that's what we use here. To build Docker images you use this library, to build Go you use that library. So, unlike Make or shell scripts, you don't get low-level tools; you get reusable rules which you can compose to do what you need to do.
It gives you some uniformity, instead of just a free-form scripting environment, and because it's more constrained, it can ensure that the build is correct and reproducible — and because of that, it can also be incremental and correct at the same time. With Make you basically cannot have incremental, correct builds, because Make relies on timestamps.
A
And
if
you
check
out
another
branch,
you
have
to
do
a
make
clean
and
delete
all
the
build
files,
because
they
are
newer
than
your
sources,
but
then
don't
curse.
They
don't
correspond
to
your
sources
because
you
have
checked
out
and
all
the
branch,
so
you
need
to
rebuild
them.
Make
will
not
notice
that.
All these parameters are kind of more or less obvious, but all of that is really well documented in the rules_docker repository on GitHub. And we have a lot of targets, because we have a lot of different images, and we have make targets for them.
So this is the release: when we want to release images for a tag in CI, we build all of them and then we run them one by one to push the images. And then this is a similar thing, but with different targets, for a commit.
So if, in a branch build, you want the images, there is a manual job which you can run to get images for this particular commit. And then there is "latest", which runs on master to update the latest image.