From YouTube: Geo Design Catchup
Description
No description was provided for this meeting.
A
B
Basically, what I'm proposing here is to standardize how we produce events and how we consume the events, and to try to get away from all of the schedulers that we have today, making our event system more reliable, so you can trust in it and make sure that we can handle all the events and process them. You don't need to have those schedulers doing expensive queries to find out what we need to sync, what we need to re-verify, this kind of stuff.
A
B
Okay, so I think that I tried to explain the diagram. I think that most of the questions here come up because it's not so clear what each part is responsible for or trying to do. So here, based on the primary node, you do not have a lot of changes. Basically, we have just standardized how we produce the events, with a common interface for all the types of events that we need.
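As a rough illustration of the "common interface for all event types" idea mentioned above, here is a minimal Ruby sketch. All class and attribute names are hypothetical, not taken from the actual codebase:

```ruby
# Hypothetical sketch: one common shape that every Geo event type
# could share, regardless of what happened on the primary.
class GeoEvent
  attr_reader :event_type, :project_id, :payload, :created_at

  def initialize(event_type:, project_id:, payload: {})
    @event_type = event_type    # e.g. :repository_updated, :repository_deleted
    @project_id = project_id
    @payload = payload          # event-specific details
    @created_at = Time.now
  end

  # One serialized shape for all event types, so consumers never need
  # per-type handling just to read an event off the queue.
  def to_h
    {
      event_type: event_type,
      project_id: project_id,
      payload: payload,
      created_at: created_at.to_i
    }
  end
end
```

The point of the common interface is that producers and consumers only ever deal with this one shape, rather than one schema per event type.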
C
B
I don't have a formal proposal for how we should store the data in the database. It is open for discussion. I'm not sure if the best way is to keep it the way we're doing it today, with a lot of tables, or if I should add one table with a JSON column (sorry, to store the payload), but this is open for discussion. So this will be the change here: how do we store the events? I think that we can start using what we have today.
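The "one table with a JSON payload column" option under discussion could look roughly like this sketch (the row shape and column names are assumptions for illustration only):

```ruby
require 'json'

# Sketch of the single-table storage option: every event type shares one
# table, with a discriminator column plus a JSON payload column holding
# the type-specific data.
def event_row(event_type, payload)
  {
    event_type: event_type,          # discriminator column
    payload: JSON.generate(payload)  # JSON/JSONB column
  }
end

row = event_row(:repository_updated, project_id: 42, source: 'push')

# Consumers parse the payload back without needing a table per event type.
data = JSON.parse(row[:payload])
```

The trade-off versus "a lot of tables" is fewer migrations and a uniform read path, at the cost of weaker schema enforcement on the payload.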
B
It's not going to be so different from what we have, but here the log cursor, our Geo Log Cursor, is responsible only for streaming the events from the database. For example, it will know that a new event arrived on the secondary node, create this event as an object, and put it on the event queue, and the event will be like the one Mike asked about.
B
B
So this queue here will be like a queue that's not only in memory, because I would like to persist this state in Redis in case of some failure of the Geo secondary node. If you have some failure here, we have all the data in Redis, and we can rehydrate these events when the Geo secondary node is up again.
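A minimal sketch of that persistence idea, with a plain Hash standing in for the Redis connection (real code would use the redis gem; all names here are illustrative):

```ruby
require 'json'

# Sketch: an in-memory event queue whose state is mirrored to Redis so
# events survive a secondary failure. A Hash stands in for Redis.
class PersistentEventQueue
  def initialize(store)
    @store = store   # stand-in for a Redis connection
    @queue = []
  end

  def push(event)
    @queue << event
    persist
  end

  def pop
    event = @queue.shift
    persist
    event
  end

  # After a crash, rebuild the in-memory queue from the persisted state.
  def self.rehydrate(store)
    queue = new(store)
    JSON.parse(store['geo:events'] || '[]').each { |e| queue.push(e) }
    queue
  end

  private

  # Every mutation writes the whole queue back; a real implementation
  # would use Redis list operations (RPUSH/LPOP) instead.
  def persist
    @store['geo:events'] = JSON.generate(@queue)
  end
end
```

Rewriting the full queue on every operation is only for clarity here; the design question is simply that the queue's source of truth survives a process restart.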
B
Let's see how it can work. It's not the Sidekiq workers, like you mentioned; it's something that we have in memory, and you have some workers, like the Geo Log Cursor, that are picking up these events to process, so we can have some priorities here if we need, and you can have a lot of improvements here for what we need. For example, since we have all these events in memory, we can compress our backlog of events during each process as well.
B
Say you have ten events to sync the same GitLab project: you can compress these to have only one event, and the final result will be the same. I know that you can do this for some kinds of events but not for others, but I feel that the most common event that we have in our database is the repository updated event, I think.
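The backlog compression described here can be sketched in a few lines. This assumes a simplified event shape (a hash with `:event_type` and `:project_id`), purely for illustration:

```ruby
# Sketch of backlog compression: N "repository updated" events for the
# same project collapse into one, since syncing the project once yields
# the same final state as syncing it N times.
def compress(events)
  events.uniq { |e| [e[:event_type], e[:project_id]] }
end

backlog = [
  { event_type: :repository_updated, project_id: 1 },
  { event_type: :repository_updated, project_id: 1 },
  { event_type: :repository_updated, project_id: 2 }
]

compress(backlog).size # => 2
```

As noted in the discussion, this only works for idempotent event types; events whose effects are not safely collapsible would have to be excluded from the dedup key.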
A
So can I ask a question there about the queue and reading the events out of the database? At the moment, in this diagram, there is a piece of code that reads the information out of the database and puts it into the event queue, but for that piece you're proposing to actually use a different piece that's built into Postgres, in terms of streaming out the information that's there, rather than performing the read queries to get the data. Is that right?
A
A
A
Because when I saw this, the first thing that popped into my mind was that the queue is moving from the database into memory, and that we are using the Postgres replication to transport the queue information to the secondary node. But when I started to read about eventing and messaging in this context, it feels like we could actually use eventing from the primary node directly to the secondary and bypass the replication step.
A
I mean, yes, we would still need to replicate the database across, but using eventing from the point of the primary into an event queue: so when the events are created on a primary, they get put onto a system like RabbitMQ, where you can then push the message to multiple secondaries, and the secondaries read the message and process it on each secondary.
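The fanout pattern being proposed here can be sketched with an in-memory stand-in for the broker (with RabbitMQ via the bunny gem this would be a fanout exchange; everything below is an illustrative simulation, not broker code):

```ruby
# Sketch: the primary publishes each event once, and every Geo secondary
# receives its own copy to process independently.
class FanoutExchange
  def initialize
    @subscribers = []   # one queue per secondary
  end

  def bind(queue)
    @subscribers << queue
  end

  # Publishing delivers the same message to every bound queue.
  def publish(message)
    @subscribers.each { |q| q << message }
  end
end

secondary_a = []
secondary_b = []

exchange = FanoutExchange.new
exchange.bind(secondary_a)
exchange.bind(secondary_b)

exchange.publish(event_type: :repository_updated, project_id: 7)
# Both secondaries now hold the event and can process it on their own.
```

The design point is that the primary publishes once and does not need to know how many secondaries exist; binding a new queue is how a new secondary joins.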
A
B
C
I also have a question here. Looking at what we have today: we store all the events in one or more tables; we use the replication to transfer that information from the primary to the secondary; then we use the cursor, we read them, and right after we read them we create the Sidekiq jobs. Is this the correct architecture?
C
For the next one, instead of reading events on the secondary and creating the Sidekiq jobs, you're proposing that we store them somewhere, which can be in memory or can be Redis, and then you have a separate process for getting them from this data store and executing them. But, all right, it's hard for me to not see this as us implementing a background job system in parallel: we have a queue of things, and we have a worker that consumes that and then executes it.
C
B
B
We also have to track the event gapping, and here I'm just trying to break this into smaller classes with a single responsibility. Basically, this one is responsible only for reading the events from the database and putting them on the queue, and I have another one, which is basically the log cursor here, that just reads the events and schedules the jobs as we're doing today.
B
The change in the architecture is not so big; it's just to make every piece have only one single responsibility. For example, today our Geo Log Cursor needs to know how to read the events from the database, needs to know how to track the event gapping, needs to know how to schedule a job. It needs to know a lot of things, so now we're just splitting that up.
B
We don't need to have the backfilling worker running all the time; we can trigger a backfill on demand, so the primary can start requesting some events, or the secondary can run this backfilling worker only when needed. And another change: in case of failures, for instance a failed repository sync, a verification, or a download of an artifact that failed, today you have the scheduler running a query between the secondary database and the tracking database.
B
We are putting our events inside the queue, so this is the big difference from today, because our log cursor here is responsible for reading from the database, but in case you have a failure, we don't need to track this in the database. The event itself, the job, can just put another event onto the queue, and as soon as another worker is available, it tries to sync the repository again. We don't need to query the database to find the failures.
B
C
B
Yeah, but the problem is that we need to, basically... it's not a reliable job processing system, it's a reliable event processing system. It's almost the same, but with event streaming. So we need to make sure that we are processing the events in a reliable fashion: when we are processing the events, we make sure that we don't lose any events, and then we start triggering the jobs. The event system should make sure that the job is running, you know.
C
I get that. I mean, RabbitMQ and a few others that you mentioned earlier on the issue can be used as a message broker, event broker, etc., and some of them have the reliability characteristics that we need. The whole idea is that, if we find something that works for us and has the kind of features that we need, we can remove the log cursor entirely from the diagram. So we keep the events generation on the...
C
It feeds multiple channels, or multiple topics, one for each Geo secondary, that get replicated or transmitted to the other side. Each one consumes that, and then this whole thing on the secondary (the log cursor, the event queue and the workers) goes away, and now you have something that just triggers the consumers.
C
A
That's what I was trying to get at earlier: it sounds like, in the diagram that's shown here, the queue is just moving out of the database into a more standard queuing mechanism that you could populate directly out of the primary.
A
At the moment, the Postgres replication is really responsible for moving the queue to a secondary, but the events could be published straight out of the primary to processes that exist on the secondary nodes, which then populate the database, or backfill, or otherwise process all of the events on each secondary.
B
A
B
For example, when we receive an event on RabbitMQ or on Kafka, say an event to synchronize a GitLab repository, but you have some database replication lag, you don't have the information in the database yet. How can you handle all of this? I think that this is something where you need to do some mitigation.
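One possible mitigation for the replication-lag problem raised here is to requeue an event whose backing row has not replicated yet, rather than failing it. This is only a sketch of that idea; the event shape, the retry limit, and the "give up and rely on backfill" fallback are all assumptions:

```ruby
MAX_RETRIES = 5

# Sketch: if an event arrives on the secondary before the corresponding
# row has replicated, put it back on the queue with a retry count
# instead of treating it as a failure.
def handle(event, db, queue)
  if db.key?(event[:project_id])
    :processed   # the row has replicated; safe to process the event
  elsif event.fetch(:retries, 0) < MAX_RETRIES
    queue << event.merge(retries: event.fetch(:retries, 0) + 1)
    :requeued    # try again once replication has caught up
  else
    :dropped     # give up; an on-demand backfill would pick this up
  end
end
```

In practice a delay or backoff between retries would be needed so the requeued event is not immediately retried against the same stale state.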
C
Can I propose something here? I think you're in the right direction with what you're thinking here, but maybe, instead of thinking of it as individual components that will be implemented as code, try to identify all the characteristics that this magic queue that we want to use should have. For example, it should be able to handle the case where you have, say, ten updates to a repository and it needs to deduplicate them to just one.
C
All these things, I think, will give us more room to either implement it ourselves or move to an existing solution, because if we understand all the requirements that we need to fulfill, it's easier to think about. I was also thinking: let's say that we decide to implement it, or we decide to use RabbitMQ or something like that. We should probably also consider that some users will install this queue solution, and some will want to use something like Amazon SQS, etcetera.
C
A
A queue in place doesn't change the expensive queries, but looking at a different way to stream out of the database does. And maybe it's worth actually splitting this into two separate proposals, saying we can change how we read out of the database, and then we can change how we post these events. That might also make it easier to unmake either of those changes, and easier to evaluate the two things separately, because I do think that they are separate things, but I'm open to being wrong.
B
A
So it sounds like the database one is the one that is, I don't want to say easier to accomplish, but it sounds like it has the least moving parts to accomplish that piece, compared to starting to evaluate queuing systems and what that needs to look like. But I have a feeling about the eventing side: it feels like a correct destination to go towards, even if all of the pieces aren't aligning.
A
Yet it feels like that's the right way to be thinking about the system, to be reading and processing events. But in terms of moving forward from this, I think we need to split this proof of concept in half and do each of the pieces independently of each other. And when thinking about the coding changes that you were proposing, I suppose those coding changes are more around the processing of events, the creation and processing of events, than about the streaming aspect? Yes, this.
A
C
A
Then the first piece is streaming out of the database, the second piece is changing the code to use producers and consumers, and then the third thing is actually to use eventing instead. [time skip] Okay, so I think the right thing is I'll put a comment on the ticket with a link to the recording, saying that this is basically where we came to in the discussion: whether this is actually spawning separate proposals.
A
A
Definitely, and I think it's useful to have calls about these kinds of things, because there are some things where I don't know how on earth we would have got there in a written form inside of half an hour. So I know that from an async perspective it's good, because people can be part of the conversation, but sometimes it's just useful to have a chat, and it pushes things over that boundary that needs to happen sometimes.