B
What I would like to see is to take what we already have on the remote development branch. We already have, you know, a module there for remote dev, in kas and in agentk, and there's some polling in kas, and there's a for loop in agentk. But [I'd like] to see if we can just spike on transforming that into the architecture that we've currently converged on.
B
Okay, which is that agentk will have a polling loop, and it will make a request directly to Rails.
B
You know, a desired or an actual state as represented by the events, but we don't have to do that; I'm just looking for the communication. And the response will contain, possibly, some work to be performed, to have it attempt to converge the desired state with the actual state. But what I'd like help with is: you said that there is a way that agentk can, you know, directly call a Rails endpoint and just have kas proxy it through, with little or no code in kas, for the first iteration?
B
That's what makes the most sense to me. You know, Shekhar is definitely talking about whether we could have an event-driven approach.
B
There are definitely performance concerns with this; like, a long poll would be better, and then have kas do the polling. But we can refactor towards those in the future if there are performance issues. For now, let's do the simplest thing, which is just a poll driven by agentk, right?
C
Yeah, no, that makes sense, and thanks for explaining it, because I was actually going to ask; like, I didn't get exactly what our two options are here. I didn't realize it was something where agentk could directly call Rails. Or, like, directly-indirectly: it's still through kas, but kas proxies it.
B
And we could have kas poll, but we don't need to, so let's keep it as simple as possible. Like I was talking to Shekhar this morning: yes, we could have sort of an event-driven architecture, but that would imply kas has some state somewhere; there's something external that sort of has a scheduling loop. But this...
A
Oh, this looks... what we currently have here is... that makes sense. Yeah: we get work, we parse it, and we apply it. Yep.
B
So
the
part
that
I
was
vague
on-
and
this
was
what
Vishal
and
Guillermo
and
I
were
looking
at
and
Guillermo.
They
definitely
no
go
better
than
me,
but
we
want
this
to
be
like
on
a
one.
You
know
an
in
second
pull
like
one
second
or
five
seconds.
Whatever
we
decide
like.
Where
is
that
happening?
The
the
weight
here.
C
Yeah, we were sort of confused between the polling that agentk was doing versus the polling that kas was doing in our current code. I think Vishal copied it over from GitOps, but I think on kas it was something like an RPC structure, an internal structure that's part of the project, and here, at least so far, I only see, like, this infinite for loop. So, like, I don't know if the waiting is happening somewhere else, or...
A
So if you just want to call Rails, then let's look at container scanning. We... this one, you see, it doesn't have the server module, started with the factory, and... let's start with config and...
A
Not, not any, but it calls... maybe it's documented here. No, now it's not documented. It's documented somewhere, I'm sure, but I...
A
MakeRequest, yeah, which...
B
Which... this is fine. I think the part we're looking for help on is: how can we do this? Like, just pretend that URL exists, and there's something on the agentk side with the poll, with that clock around it.
A
In kas there's a very similar API that you are using. So what you are using is used by this thing.
A
Polling... let me see if this has anything for polling. No, there is stuff for polling on the server side, but on the agent side you just need your own configuration for polling, which you can just create, and...
A
Works
on
that,
that's
that
code
exists
on
the
API
as
a
method.
It
only
exists
in
the
server,
but
on
the
client
side
you
can
just
do
what
I
don't
know.
This
thing
does,
which
okay
does
what
yeah.
A
Basically, like... the interval, well, in this case it's here, but this one is a good one. So the interval is five minutes, and then, like, backoff: max backoff starts from scratch after there was an error, like after two minutes, and the backoff factor is 2. Jitter is one percent, or 100, not sure. But yeah, you can construct the polling config, basically, and then you can use it like this.
A
Jitter is the jitter of the interval for polling. So is it in the normal interval as well? No, I think it's only for backoff, actually. So if there was an error, you don't want all the polling agents to keep polling at the same moment, even with backoff; you want them to spread out a little bit. So if there is an outage, you don't want all the agents hammering the API.
C
But I guess, like, what would a natural or sensible value for the jitter be? Because here it's like one percent of five...
A
...minutes. It's probably... I'm not sure if it's a percent or what this is, I just don't remember. Okay, let's go look at the factory, the...
C
The thing is, if we're doing polling every second, or every 10 seconds, or whatever, like, the scale is very different than five minutes.
A
So again, look at the factory in the GitOps module, the GitOps sub-module, basically. And yeah, it just constructs the config, and then polling is this, basically. And then this function that you provide is called every time the right moment is reached, basically. And then you have the either... so it returns when you return an error, or you say that it's done. So the attempt result is what your function can return. Continue means continue with polling, with the interval, or, yeah, with the interval. Immediately means, like, call me again right now.
A
Basically, unless the context is canceled; like, it checks the context. And backoff means there was an error: call me a bit later, depending on the config. And done means okay, stop polling, like, I want to return from the poll-with-backoff. And then, as you can see, this thing never says done. Yeah.
A
It just backs off and continues forever. So this thing is what...
A
...allows you to poll with the interval, like, let's say, five minutes. So it tells the function to continue polling at five-minute intervals, also with jitter; or maybe no, jitter is only for errors, actually, I think. So continue means in five minutes, and continue-immediately means call me immediately; it's like a tight loop. Okay, you know this.
A
Yeah, this is on the agent side, and this is called in a goroutine. The start... so it just loops until the jobs channel is closed, basically, or the context is canceled, the context of the job. So when the context of the job is canceled, this aborts and it starts waiting for the next job, basically. And then, if jobs is closed, this exits, and then the goroutine exits.
C
I guess we could spin off our own goroutine from within the function, within poll-with-backoff; like, if whatever answer we get from Rails triggers an alert or a condition that means we need to do some sort of work or reset on the workspaces.
B
...plan, and that will be in parallel, and then that work will get kicked off; it'll happen whenever it happens. And then every loop is like: what are the new events for every workspace that I care about? Okay, take these events, and we're going to send those, as the new actual state, back to Rails. So...
B
There's nothing that would be blocking. I mean, we have the choice to do the... like, the processing we have to do is to transform the Kubernetes events into, like, a representation of the actual state. So presumably that's a fast operation. So my initial inclination is to do that all on the Rails side; like, we want to make agentk as dumb as possible, right? So then what Rails has to do is basically just processing.
B
Whatever
information
it
got,
it
could
be
the
relevance
it
could
be
pre-processed
before
they
get
them
either
way
and
apply
them
to
the
database
and
then
do
a
query
to
say:
okay,
based
on
the
current
state
of
every
workstation,
where
the
actual
and
the
desired
or
different
do
I
have
any
work
requests
to
send,
and
then
it
will
generate
that.
But
that
is
all
there's
like
no
remote
calls
it
should.
It
should
be
basically
an
instantaneous
request
as
fast
as
the
database
can
respond
to
it.
C
Yeah, the requests... like, the only requests that Rails could make would be, like: start this new workspace, stop this one, restart this one. It's probably not that complicated.
B
Stuff like that. So the... a synchronous request. And the benefit of doing it in a single request is because, when you're listening for the information, that may contain some new actual state, and then you generate the work based off of that latest information that you just got. And then we're also keeping a timestamp; you know, we'll echo through a timestamp for each workspace.
B
What are the events we have processed, that have been successfully applied and recognized and processed on the Rails side? And then, when that comes back to the agentk side, it will know: okay, all this is done now; on the next poll I can send any new events that are relevant to that workspace. And, of course, for each agent it'll be an array of workspaces.
B
However,
many
of
it's
managing
the
same
information
with
the
timestamp
for
each
one
and
info
and
work
coming
for
each
one,
so
they
will
be
large,
potentially
large
payloads,
but
these
are
not
a
lot
of
traffic.
So
that's
the
the
other
point
that
Chicago
brought
up,
but
we
can.
We
can
do
the
math
like
if
we
pull.
You
know
once
a
second,
that's
360,
you
know
an
hour
or
whatever
have
whatever
that
traffic
is
multiplied
by.
However
many
you
know
warrant
stations
we
have
across
all
of
the
agents
and
there's
multiple
levers.
B
It has already started? Yes, yeah. An actual state that is re-sent, it'll just rewrite it in the database. It's fine; it's idempotent. Either of them can receive duplicates. And then the one thing that is left is, like: well, what if, you know, kas or agentk or Kubernetes or whatever just drops one of these requests? It's gone.
B
There
will
be
on
the
rail
side
like
if
there's
some
State
change,
that's
expecting
like
I'm
waiting
on
this
works,
that
station
space
would
be
provisioned
and
like
I,
have
there's
a
time
stamp
for
like
when
I
got
the
last
actual
state
of
this.
You
know
it
started
going
from
provisionally
requested
to
provisioning.
C
Oh, well, please help me out with, like, the infrastructure part. Like, we would have, like, one agent per Kubernetes cluster, and each client could have one or many Kubernetes clusters.
B
So, like, this is the high level of it, right? The agentk is subscribing to the events... against the events. It's no longer a long call like that; that has changed with this idea that we can actually get through. So it's just going to POST this request. It's got any, you know, actual state, that's the better terminology, but whatever status info it has, and then Rails applies that in the database.
B
It
will
still
be
polling
for
any
new
events,
so
like
once
that
workstation
is
provisioned
and
has
started
we'll
get
that
event,
and
then
that
will
be
sent
back
over
and
reflected
here
in
the
rails
database
and
like
this
is
just
a
little
bit
more
of
a
drill
down
to
the
details
of
how
that
worked,
but
still
not
with
the
timestamp.
So
I
wanted
to
make
some
additional
diagrams
like
representing
how
the
timestamps
work
and
to
your
question
Guillermo
like
yeah.
B
Here's the monolith; there are multiple kases; each kas has multiple agents talking to it; and each agent can be responsible for multiple workspaces. And here's sort of the state that we did: so, like, there's a desired state and an actual state, represented in Postgres, and on every request we, like, do a new query to get: where are they different, and is this something that I need to generate some work for? In some cases it won't be, like if the desired state is started and the actual state is provisioning.
B
It's
like
okay,
I'm
waiting
for
this,
but
then
maybe
there'll
be
a
time
stamp
Associated.
Here.
It's
like
okay,
if
I've
been
waiting,
five
minutes,
I'll
go
ahead
and
resubmit
a
you
know,
provisioning
request
or
you
know,
I
started
requested
state
and
so
no
premature,
optimization.
My
assumption
is
like
this
will
scale.
It's
like
the
number
of
work
stations
everywhere
times.
However,
often
we
pull
is
what
will
actually
be
hitting
the
rails
API
like
I
know,
we
have
a
lot
of
high
traffic
endpoints
and
a
lot
of
high
traffic
tables.
C
Yeah. Also, regarding Guillermo's question earlier: not all GitLab agents will have remote development environments set up with them. Like, there could be multiple kases, as you mentioned, and maybe only one is tied to a Kubernetes cluster that has the remote development environments. Or it could be the case that a client specifically doesn't have that functionality, or hasn't provided their own, like, development environment, etc., right? So maybe we should have a way for agents to know whether they need to be polling or not, and...
A
That's what you get via the channel that gives you the configuration objects. If you open your module, you will see that object that you get. I can show you how you can open that. That's fine... stop the share, sure.
A
Particularly for kas... so for kas, the configuration is static: kas starts with the configuration from a file and some environment variables, and that's it. But the agent is dynamically reconfigured through this channel. Basically, the agent is given the configuration that is the configuration file plus some extra things here: agent ID, project ID, and that's the end. The rest is coming from... this is coming from the configuration file.
A
So
you,
if
you,
if
you
add
stuff
to
configuration
file,
that's
when
it's
enabled
or
disabled,
that's
so
I
mean.
Presumably
the
user
would
want
to
configure
the
agent
to,
for
example,
create
workspaces
in
a
particular
namespace
right,
so
that's
configuration
and
that
can
be
if
it's
configured,
then
that's.
A
That means this module should be enabled, if you want to do it this way. But, like, the starboard_vulnerability module: you always poll, for example, because configuration for the security scanning can be added via the user interface, and for that there's no need to enable it. But there is also configuration if you enable it like that. But that creates... I mean, how many people will use this?
A
As
a
percentage
of
the
agents-
probably
not
everybody,
like
maybe
I,
don't
know
less
than
half.
Definitely
so
everybody
will
be
creating
a
lot
on
the
infrastructure,
because
that's
not,
but
on
that,
that's
not
opting
that's
kind
of
in
the
you
can
also
opt
out,
but
we
also
have
the
idea
that
everything
is
enabled
by
default
right
so
I
don't
know
what
needs
to
I.
C
I mean, we could add the configuration but have the default be that it is on, and then people can opt out of their agent doing polling for remote development. Which, I guess, if that default is for it to be on, people would only go in to disable it if, like, they're debugging something, or the infrastructure is getting hit hard with lots of requests or whatever; in which case we should, yeah, either way, change a bit how frequently we're polling, or whatever. But, so, yeah, I don't know.
B
And
so
shekhar
has
he
wants
to
talk
about
an
alternate
architecture
that
is
event
driven
where,
like
events
are
coming
from
the
rail
side,
and
then
you
know,
information
coming
from
the
agent
K
side
and
Cass
is
reconciling
this,
which
that
seems
like
it
will
definitely
address
the
load
right
and
the
the
chatting
is
and
and
event
driven
architecture.
But
it's
conceptually
more
complicated,
like
you
said
when
we
initially
proposed
that
the
polling
is
easier
to
reason
about.
B
But
if
we
have
lots
of
levers
we
can
reduce
the
polling,
we
can
do
whatever,
but
we
can
also
say
all
right.
This
is
still
the
same
basic
architecture,
but
like
it's
not
going
to
send
any
data
from
Cass
unless
we
have
it
and
then
use
the
other,
you
know
RPC
direction
to
you
know
it
will
always
respond
to
any
new
state
with
the
work.
But
if
there's
new
work,
that's
just
initiated
by
a
user
like
create
a
new
workstation
on
the
fly
from
scratch.
There's
no
there's
nothing
triggering
that
to
happen.
B
It's
not
responding
just
from
an
existing
state
on
the
agent
K
side,
we
could
just
have
like
an
influence.
That
is
hopes
it
right
who
rails
through
the
grpc
interface
to
cast
through
the
agent
K
and
say:
hey
Agent
K.
Just
do
a
poll
right
do
a
no
off
to
kick
off
this
Loop
and
then
it
will
receive
okay,
yeah.
A
...some time ago. The only difference is I would poke kas, not agentk, and kas would do an immediate call, and agentk would, just like GitOps, just wait for a reply. And then that poking can be Rails calling the gRPC endpoint of kas, and kas doing a broadcast to all kas instances through Redis, so that, like: hey, any kas that is polling for workspaces for agent ID 10.
B
Yeah, it's, like, faster to iterate. If the shapes of these, you know, communications change, the only place we have to change it is on the agentk side and on the Rails side, and then we iterate, and we're like: okay, this is how it works. Then we can say, all right, how do we optimize this? And then, if we have to make kas smarter, and it knows about the structure of these, or maybe it's got its own polling, that could be in the future, but there'll be less churn.
B
Right, which is why one of the priorities, I think, should be to, you know, get it on somebody's radar. Let's do some back-of-the-napkin math: how many customers do we expect, how many workspaces, over what period of time; and, you know, with this polling interval, this would be X load. And also, like, how big are these payloads? Like, if we just sent over the raw events from Kubernetes, filtered, and let Rails do those, would that be, like, a meg per request?
C
Just talking about the configuration, right? Like...
C
Just a top-level on/off for remote development on a particular agent.
C
What we were talking about earlier: having some sort of configuration, whether the default is it's on or the default is it's off, but having it in the configuration whether we want a particular kas to enable its agentks to start polling for remote development environments or not.
A
So imagine you have a platform team; they run 10 agents, like two in production and, I don't know, some in staging and other, whatever, testing agents, right? Then...
A
And then, you know, they grant access for developers only to certain agents in certain environments. Yeah, filtering by environment is going to be in this release, I think, or the next release.
A
That's what we can build on top of: it's enabled if it was enabled by the admin, so it becomes an opt-in. And this would be consistent with other features GitLab has. It's often... starboard access is opt-in for most of the projects; like, it's only enabled for the agent's project by default, out of the box, and for others it's opt-in. So...
C
The only difference, at least for the initial revision, basically.
A
That
should
start
that
polling
yeah
that
doesn't
change
the
architecture
it
just
and,
if
condition
basically
should,
should
this
happen
or
should
I
just
like
don't
do
anything,
you
know
yeah.
A
Or
they
don't
have
leader
election
because
is
all
it's
a
single
cluster
of
things
they
talk
to
each
other,
but
logically
it's
a
single
thing:
there's
any
of
them
service
any
agent
right,
the
it
doesn't
create
any
problems.
A
So
in
the
agent
we
have
the
leader
election
so
for
let's
say
agent
Ka,
it's
like
agent,
ID,
1
and
Agent.
B
is
Agent
id2
they're
separate
agents,
but
each
of
them
can
have
multiple
pods
because
the
user
deployed
them
for
whatever
reason
right.
So
each
of
them
can
each
of
those
10
agent.
Ka
will
do
the
same
thing
right
because
they
will
all
get
the
same
configuration.
A
It can create problems for some features; for some it doesn't, but for some it may. So for those where you don't want every pod to do the same thing, you do leader election. This is available in agentk already: GitOps uses it, container scanning uses it. I think that's it. Maybe I...
B
And we want... like, large payloads are okay. These are going to be, you know, wide-area connections, but they're going to be good connections, data center to data center or something like that, ideally. So you can send large payloads but fewer requests; that's okay in general.
A
This is, again, how GitOps works: agentk connects and waits for kas to reply; kas replies only when it has something to reply with, and then the connection times out and agentk reconnects, like, every half an hour or so, right.
B
And so, if we go with the philosophy that as much of the logic as possible is going to live on the Rails side, right, even the parsing of the events, maybe, but definitely the logic to move things through states, to compare desired versus actual, to decide what work needs to be requested; if that all lives on Rails...
B
It's
more
of
you
know
not
simple,
but
it's
more
of
a
straightforward
architecture,
change
to
say:
okay,
we're
moving
from
the
polling
being
on
Agent
K
to
Kaz,
and
it's
not
that
big
of
a
deal,
because
the
you're
not
moving
processing
logic
around
and
you're,
just
changing
where
the
polling
is
happening.
But
it's
still
essentially
the
same
data
being
passed
through
yeah.
A
So, I guess, to recap: I guess we want configuration for opt-in. Then, depending on that, in the module you would either start the loop or not start the loop. You would implement the leader-election module interface; that means an extra method on the module that checks the configuration object and returns true/false: if it wants to run, it returns true; if it shouldn't run, it returns false. You can see how this works in the GitOps module; again, you can share the screen here.
A
This Run is the same as this Run, and this interface embeds this interface; it just documents how this is different versus this. But basically you will look at the config, like this. Here it is: if there are manifest projects, then it returns true, so this is executed; and then this looks at the manifest projects, and, well, Run looks at the config, basically, and blah blah blah. You will be doing something like that.
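The shape being described might look roughly like this. The names are illustrative; the actual agentk module interfaces differ in detail, but the idea is the same: alongside Run, a module that takes part in leader election reports whether its configuration says it should run at all.

```go
package main

import "context"

// config stands in for the agent's configuration object delivered over
// the configuration channel.
type config struct {
	RemoteDevEnabled bool
}

// leaderModule mirrors the contract described above.
type leaderModule interface {
	Name() string
	IsRunnableConfiguration(cfg *config) bool
	Run(ctx context.Context, cfg *config) error
}

type remoteDevModule struct{}

func (remoteDevModule) Name() string { return "remote_development" }

// IsRunnableConfiguration gates the polling loop: if the feature is not
// enabled in the config, the module never starts its loop.
func (remoteDevModule) IsRunnableConfiguration(cfg *config) bool {
	return cfg.RemoteDevEnabled
}

func (remoteDevModule) Run(ctx context.Context, cfg *config) error {
	return nil // the polling loop would live here
}
```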
A
The defaulting as well, so three things, yeah: Run, the defaulting, and the Name of the module. And then you also have another configuration, and there is another one, and another module.
A
Thursday is the first day off, but I'm off for two days; on Monday I'll be back. Okay, I think, unless... I have requested more leave, but it will be in a week. So next week I will be working, and then, after that, there are holidays.
A
No problem. So I'm thinking, again, for the future, not right now: like, this API to make the request from the agent, I'm thinking of adding a new method there to do what we have discussed, basically; to do the long polling between the agent and kas, with kas doing the actual tight polling loop. But also make it so that, on the kas side, this is not a prebuilt module like the one that is for the simple request-response.
A
Yeah, so this gitlab_access handles the make-request that comes from the agent, and that's the request to Rails. So I'm thinking of building something like this, but not as a prebuilt module. So, like this gitlab_access, but as a library, so that on the kas side, in your remote dev module, you can use this as a building block, but then make it easy to plug in the notification mechanism: so that when kas is poked by Rails, it broadcasts a notification, and then potentially another instance of kas receives that notification.
A
It can poke, kind of, itself to do another poll, basically. And because this might be useful for other things, and...
A
Like, for leader election, I built it as reusable machinery in the agent, and then multiple modules use that. And for this, I think it would be better to also build it as reusable machinery, for future potential reuse. But again, that's for later; I'm just sharing thoughts. And another thought I had is about that polling that kas would be doing, the tight polling loop.
A
If
it
doesn't,
it
can
say
no
content,
you
know
204
and
if
it
oh,
maybe
there
is
a
special
reply,
actually
not
sure,
probably
no
content
and
then,
if
it
does
have
something
to
reply
with,
it
would
reply
and
say
Here's
a
new
e-tag
and
then
that
generic
polling
logic
in
Castlewood
would
not
need
to
well.
It
can
stay
generic.
Basically,
it
would
not
need
to
understand
if
it
should
return
or
it
should
not
return.
This
particular
reply.
A
So kas would be polling at the same intervals, but it would just do it at the HTTP level. It's like state attached to the connection itself; I guess you can view it like that. And then that ETag would be returned to agents. It's like, for GitOps: kas returns the commit ID; then the agent, when it polls with the outer-loop polling, re-sends the commit ID; and then kas polls GitLab with that commit ID.
A
It's, yeah, it's the HTTP-caching part of caching, but, yeah, nothing is actually cached somewhere, right? Well, you can actually cache something on the Rails side, but that's kind of... that's the internal implementation detail of how you would reply with no content or here-is-content. You know, I mean, the dumb version would compute the result, hash it, and that's how you would get the ETag; and if that matches the ETag that the client sent, then you would just say: no, no reply.
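The "dumb version" just described is easy to sketch: hash the would-be response body into an ETag and compare it with what the client sent, answering 304-style when they match. The function names and the truncated hash are illustrative choices:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
)

// etagFor derives an ETag by hashing the computed response body.
func etagFor(body []byte) string {
	sum := sha256.Sum256(body)
	return `"` + hex.EncodeToString(sum[:8]) + `"`
}

// respond compares the client's If-None-Match value with the fresh ETag:
// on a match the server can answer 304 Not Modified with no body.
func respond(body []byte, ifNoneMatch string) (status int, etag string) {
	etag = etagFor(body)
	if ifNoneMatch == etag {
		return 304, etag
	}
	return 200, etag
}
```

As noted in the discussion, this still computes the full result on every poll; skipping even that would need something like the Redis-recorded "no changes since this timestamp" marker.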
A
Respond with a 304, yeah, but that's still doing work on the server side. Then you can do a bit... I don't know, somehow recording in Redis that there were no changes for this agent ID, or something like that, since this timestamp, and then the poll would check Redis, see that there's nothing to do, and not even touch the database. But that's all for the future, of course; I'm just now thinking about it.
B
Okay, I think it's good. Thank you so much, thanks for iterating on the architecture; we're getting closer every time.
B
All right, bye-bye. You're gonna share this in the remote dev channel? Yeah.