From YouTube: 3/16 hyper63 TSC Meeting

Description: hyper63 technical steering committee meeting
* Discussion: RFC 008 Queue Service
  https://github.com/hyper63/hyper63/discussions/161
A: All right, welcome to the hyper63 technical steering committee meeting. It's March 16th. Everyone needs to remember to wear green tomorrow, at the risk of social distancing fouls.
A: One announcement is the Open Source 101 conference on March 30th. It's free, and I definitely recommend, if you can, signing up for that. I think there will be some good talks.
A: It's virtual, like everything these days, but I think the talks will be good. I'm really looking forward to Jeff Atwood, who maintained, or used to maintain, Stack Overflow and had the blog Coding Horror. Okay, sorry if I locked up.
A: Yeah, he's the maintainer of Babel, and yeah, Ben is well known in the React community. I think there are some good speakers. There's definitely something worth picking up.
A: Cool. And a big shout-out to Tyler and Tripp for asking some really tough questions on this RFC. I'm going to go ahead and jump into that, and then I know Casey is working on an auth service and he may have some questions or comments.
A: It's doing pretty well, and with a lot of tweaks to search, that should be working pretty well too. Of course, data is working well, but the upcoming service that's being requested is a queue service. What I'll do is just share my screen with the RFC.
A: I was using the Elasticsearch adapter for search, and MiniSearch, and both of those are up. I'm also using Kendra on a project, but it's not an adapter yet. Elasticsearch is working well.
A: Yep, sounds good, yeah. The one thing, if memory serves me, is the filter, being able to filter on attributes with Elasticsearch. I think that bug's filed, but I'll tag you on it.
A: Okay, so again, thanks Tripp and Tyler for taking a look at this on Friday. Just some background on the queue service: it's fairly focused at this point. Basically, on one of the projects I've been working on, we were building the API in a serverless environment using Vercel. Part of the API is to run through a rules engine and then return some results back from that rules engine that are matches, right, matches for the submitter to review.
A: Then, from kind of an asynchronous point of view, there's a hooks interface where all of the entities or partners that have been matched need to get notified via some kind of webhook, to something like Salesforce, so it generates a lead or a message to them. The API that sits in front of hyper has this hooks interface on serverless, which was working great, except that with serverless, once you return your response back to the client, it kills the process.
A: So you can't run anything after that, and that's where this challenge came about.
A: There may be at least five to six kinds of hooks set when this rules engine processes and returns a result. It needs to put a message somewhere, actually five messages, if you will, that then get invoked at a later time, so that the request doesn't have to be a long-running request. When the rules engine runs, it returns back, and then those five or six requests can happen in a background process.
A: So I put up there, based on some of the feedback, that we're not trying to be the push/pull, pub/sub, end-all be-all kind of queue service. It's really focused on a worker queue service, or what some people call a push queue service. But one of the things is that we want to use hyper to abstract all the complexity away.
A: So users don't have to manage servers, they don't have to think about Redis, or about installing workers and managing those, etc. They really just have a REST interface. The other side is that, for right now, it's mainly focused around short tasks.
A: Sub three minutes, not long-running tasks. And the last thing is, it's not a scheduling service. It's really just focused on handling these background fetch or API calls, if you will. Then I broke down some background, you know, what is a queue. Feel free to edit any of that kind of stuff.
A
What's
a
task
or
a
job,
and
basically
I
kind
of
use
task
and
job
interchangeably,
but
but
basically
it's
a
it's
a
document
that
contains
instructions
for
some
process
to
use
to
do
something
it
could
be
send
an
email
could
send
a
text
it
could
be
to
make
a
web
service
call,
etc.
It's
just
a
a
json
document
and
then
kind
of
go
in
and
explain
what
is
a
work
or
task
queue
and
and
basically
the
the
best
way.
I
can
explain
it
is
in
any
queue
process.
A: The producer creates the job, which pushes it onto a queue, and there can be one to many consumers, but the broker will only assign one job to a consumer. That consumer tells the broker when it's done, and then it can receive another job. So one consumer will never receive multiple jobs concurrently, but you could have 10 consumers running, and the broker could take 10 jobs and submit those to those 10 consumers if they were all available.
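A minimal in-memory sketch of the broker semantics described above (the `assign` and `ack` names are illustrative, not part of the RFC): the broker hands at most one job to each idle consumer, and a consumer only becomes eligible for another job after acknowledging the last one.

```javascript
// Broker sketch: one in-flight job per consumer, many consumers allowed.
function assign(jobs, consumers) {
  const idle = consumers.filter((c) => !c.busy);
  // Hand out at most one job per idle consumer.
  const batch = jobs.splice(0, idle.length);
  batch.forEach((job, i) => {
    idle[i].busy = true;
    idle[i].job = job;
  });
  return batch.length; // how many jobs were dispatched this round
}

// A consumer tells the broker it's done, freeing it for the next job.
function ack(consumer) {
  consumer.busy = false;
  consumer.job = null;
}
```

So with 12 queued jobs and 10 idle consumers, the first round dispatches exactly 10, and the remaining 2 wait until a consumer acks.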
A: SQS, exactly, yep. Just super simple. There are some built-in semantics: if, for whatever reason, the job or task times out, then there would be a set number of retries. So it would retry, let's say, five times, and if on the fifth time it still fails, it would go into a dead-letter queue.
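That retry behavior could be sketched like this (the field names and retry count are assumptions for illustration): a timed-out job is requeued until it exhausts its retries, then lands in a dead-letter list.

```javascript
// Retry sketch: requeue a timed-out job up to maxRetries attempts,
// then move it to the dead-letter list for later inspection.
function handleTimeout(job, queue, maxRetries = 5) {
  job.attempts = (job.attempts || 0) + 1;
  if (job.attempts >= maxRetries) {
    queue.dead.push(job); // exhausted: becomes a "dead job"
  } else {
    queue.jobs.push(job); // try again later
  }
}
```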
A
So
I'm
gonna
skip
down
to
the
diagram,
because
that's
really
an
easier
thing
to
talk
about,
but
basically
the
the
kind
of
minimum
viable
commands
for
this
is
is
one
a
create
queue
command
which
I'm
trying
to
follow
the
same
pattern
of
all
the
services,
and
I
I
had
a
thought
to
do
an
experiment,
but
I
really
think
that
I
can
take
that
pattern
and
and
actually
abstract
a
lot
of
the
commands
in
the
core
module
and
the
express,
app
module
and
whittle
those
down
because
they
really
are
doing
the
same
thing.
A: So when a job gets posted, there will be a worker running inside hyper63, and that worker will use this URL to send the job to the outside world. Then there's the secret, and this secret could be optional. Basically, what it would do is create a JSON Web Token using that secret, so that the recipient of this job can verify the token to make sure the job actually came from hyper63.
A
So,
and
I
think
trep
asked
a
really
good
question:
is
the
secret
going
to
be
logged
are
stored
on
a
database
and
the
answer
is
no,
it
won't
be
logged,
but
yes,
it'll
be
stored
in
a
database.
A
You
know
kind
of
using
the
same
standards
that
you
would
store
a
password.
I
forget
the
the
name:
it's
like
p
b
g
k,
encryption
or
whatever
two
I
forget.
I
think
I've
got
it
down
here
in
a
note
somewhere.
A
Well,
I
can't
find
it,
but,
but
basically
it's
going
to
use
an
encrypted
field
that
matches
the
current
standards
of
encrypting
encryption
for
for
a
password.
So
so
when,
when
the
queue
is
created,
that's
kind
of
a
one-time
thing
and
it's
also
item
potent.
So
what
that
means
is
the
client
can
call
it
many
times
and
as
long
as
it's
the
exact
same
name,
it's
just
going
to
return.
Okay,
it's
going
to
return!
Okay,
it's
like!
A
And
and
that'll
just
allow
allow
the
client
to
create
an
initialization
event,
so
every
time
they
deploy,
they
can
just
re-initialize
the
service,
and
it's
all
good.
A
Then
the
second
command
is
post
job,
so
the
client
will
post
to
the
queue
and
in
that
post,
job
they're
going
to
send
their
data.
So
so
there's
no
really
at
this
point
required
attributes
there.
All
of
the
required
attributes
like
a
job
id
and
all
of
those
kinds
of
things
will
be
kind
of
behind
the
scenes.
A
Just
because
each
kind
of
q
service
could
have
different
semantics
and
we
want
to
abstract
those
away
from
the
client.
They
should
just
post
the
data
that
they
want
to
to
work
on,
and
then
we
can
deal
with
the
rest
of
the
documents
that
that's
required,
whether
it's
rabbit,
mq
or
sidekick,
or
celery
or
bq.
B: Makes sense. So, a quick question as far as the system of record for the jobs, like the queue: is the idea here to just build adapters around, let's say, SQS, that's a common one? Is hyper63 meant to be the system of record for the pending jobs, or is it purely just an abstraction over something like SQS?
A
Yeah,
I
think
each
adapter
would
be
able
to
make
that
decision
right.
So,
for
example,
the
the
bq
adapter,
the
system
of
record
would
be
redis
and
the
q
service
kind
of
sits
on
top
of
redis
and
then
hyper
63
is
just
really
just
keeping
the
abstraction,
simple
and
also
running
a
worker
or
one
or
more
workers
right
in
in
the
background
so
and
I'll
try
to
explain
that
a
little
bit
more
but
then
with
sqs.
A
It's
kind
of
the
same
thing
sqs
would
be
the
system
of
record,
but
hyper
63
would
also.
You
know
it's
kind
of
running
this
api
right.
It's
providing
this
api,
but
with
the
q
service,
it
will
also
have
to
launch
a
set
of
processes
that
are
essentially
workers
that
are
listening
to
the
queue.
A
Exactly
jobs
exactly
they're
just
they're,
they
are
they're
proxy
workers
right
that
they
they
listen
within
the
boundary
of
hyper
63,
so
that
the
client
doesn't
have
to
have
direct
exposure
to
the
queue
itself.
A
Those
work
workers,
kind
of
listen
within
the
boundary
and
all
they
do
is
when
a
job
appears
on
the
queue
they
just
dispatch
that
job
so
so
that
worker
will
call
the
worker
url
generate
the
the
token
and
send
the
data
literally
back
to
the
api
or
wherever
the
api
told
them,
that
those
jobs
go
and,
and
then
it'll
be
up
to
that
api
or
that
target.
A
They
have
essentially
three
minutes
to
do
their
work
and
then
they
need
to
respond
back
with
either
a
200
or
or
whatever
the
success
code
is
or
an
error
code,
or
they
will
get
a
timeout
so
so
really
kind
of
the
the
abstraction
is
saving
the
the
api
team
from
having
to
manage
these
workers
right
and
we
could
actually,
you
know
in
another
iteration
say
when
you
put
the
queue
you
could
specify
how
many
workers
you
want
loaded
right.
A
I
want
to
run
five
workers
or
I
want
to
run
10
workers
etc,
and
then
we
could
kind
of
spin
those
up
based
on
on
that,
but
but
for
now,
probably
we'll
just
run
one
worker
and
or
maybe
two
and
monitor
that.
A
But
when
the
the
client
api
kind
of
dispatches
our
post
to
the
queue
it
will
dispatch
the
job,
which
will
basically
be
this
process
proxy
worker
process
that
will
post
that
back
to
the
client
api
and
then
the
rest
of
the
commands
are
kind
of
management
commands.
A
So
you
know,
for
whatever
reason
there
may
be
unprocessed
jobs,
so
they
could
call
that
and
say.
Let
me
find
all
the
jobs
that
haven't
been
processed
yet
or
there
could
be
dead
jobs
or
there
could
be
unprocessed
or
or
I
might
want
to
cancel
an
unprocessed
job
right.
So,
for
whatever
reason
you
know,
I
was
supposed
to
send
five
of
these
things
out
and
the
user
doesn't
want
to
send
any
more
out.
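Those management commands can be sketched against the same in-memory shape (names are illustrative): list the jobs still pending, and cancel one before a worker picks it up.

```javascript
// Management sketch: find unprocessed jobs, or cancel one that a
// worker hasn't picked up yet.
const unprocessed = (queue) =>
  queue.jobs.filter((job) => job.status === 'pending');

function cancelJob(queue, id) {
  const i = queue.jobs.findIndex(
    (job) => job.id === id && job.status === 'pending'
  );
  if (i === -1) return { ok: false, msg: 'not found or already processed' };
  queue.jobs.splice(i, 1); // removed before any worker sees it
  return { ok: true };
}
```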
A: So that's why I opted for the REST operation semantics. I don't know if it's a standard in REST, but I have seen it out in the wild with Firebase and CouchDB, so it is used to deal with operations that don't fit the CRUD process. I actually use it to do things like query commands and other operations. I'm cool with it. Okay.
B: So, a quick question just so I understand the flow. I create a job, so say I'm using this queue service and it's wrapping SQS.
D
B
I
go
and
create
a
queue
and
hyper
63
and
then,
which
is
probably
just
like
a
one-to-one
with
like.
Like
a
like
a
topic
right.
I
create
a
job
which
gets
dispatched
to
a
worker
in
the
hyper
63
world
and
that
hyper
63
worker
will
then
post
that
job
onto
sqs.
B
A: And so, from the API client perspective, the client would basically create an endpoint to receive that work, do the work, and then return back. So it's kind of weird.
B: And so, okay, that sort of lends itself to the question I had earlier around what the system of record is. So if I wanted to cancel an unprocessed job, it would be hyper63's job (not to overload the term job), it would be hyper63's responsibility to find that job on whatever the system of record is and get rid of it before a worker picks it up.
A: Well, unless it's in the dead job queue.
B: Yeah, it's cool, it's interesting. I mean, with this scenario the client could decide what sort of schema they use for job payloads, correct? I don't know why you would do this, but perhaps you have a use case where your job handler is a GraphQL server: you could create jobs where the payload is a GraphQL mutation, and then hyper63 would just post that to the endpoint you provide when you created the queue, which would just be a GraphQL server.
A
Yeah,
absolutely
there
should
not
be
a
problem
with
that
and
there's
nothing.
You
know
nothing
saying
that
that
your
endpoint
has
to
be
the
client
api
right.
It
could
be
twilio
or
it
could
be
mel
gun
right.
So
you
could
just
create
your
cue
and
say
you
know,
here's
the
stuff.
You
need
to
send
to
mel
gun.
A
You
might
want
to
have
something
in
between
so
that
you
know
you
put
your
formatters
and
things
like
that
in
a
good
direction,
but
or
zapier
zapier
may
be
a
a
good
one
to
to
to
think
about.
Let's
say
you
needed
to
send.
A
Sin
every
time
someone
quote
logged
in
and
said
you
know,
keep
me
informed.
B: Have you given any thought to how this would, pun unintended, hook into the hooks?
A
Yeah
so
so
currently
the
hooks
port
is
is
kind
of
built
in
at
the
config
level.
Now
so
so
you
in
your
configuration,
you
add
hooks
and
you
add
your
configuration
for
the
hooks
at
that
point.
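For context, that config-level hooks declaration might look something like this (the keys and module names here are assumptions, not the actual hyper63 config schema):

```javascript
// Hypothetical hyper63 config sketch: hooks declared alongside the
// app and adapters at configuration time.
module.exports = {
  app: require('@hyper63/app-express'), // assumed module name
  adapters: [
    // data / cache / search / queue adapter entries would go here
  ],
  hooks: [
    // forward matching service events to an outside target
    { matcher: 'data:*', target: 'https://example.com/hooks' },
  ],
};
```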
A
We
talked
about
scoping
it
up
at
a
higher
level
and
have
like
a
hooks
crud
interface,
and
I
think
yeah
other
than
it
will
be
interesting
to
see
when
we
have
kind
of
both
those
pieces
how
how
they
can
interact.
But
but
I
I
haven't,
given
it
a
lot
of
thought
other
than
the
fact
that
users
of
hyper
63
could
use
this
q
service
to
push
messages
in
a
century
or
or
some
other
service,
mainly
for
tracing
their
apis.
A: And so then, all the events that are hooked in the data service would be passed to the particular queue that they gave the name of, Sentry or whatever. Does that make sense?
B: Would this be sort of the first case where hyper63 is having to maintain some of its own state, other than config for other services and connecting to other services via adapters? I think this is the first instance of that, unless I'm misunderstanding.
A
Yeah,
I
I
I
you
know
there.
There
will
be
adapters,
so
so
there's
bq,
which
would
be
the
the
first
adapter
that
that's
basically
a
small
node
module.
That's
wrapped
around
redis.
A
The
the
loading
of
the
worker
processes
right
so,
but
but
I
think
it
can
just
be
just
like
in
a
in
a
docker
compose
where
you
know
the
worker
process
is
pretty
stateless.
It
just.
D: Probably not any good ones. So originally, I guess my main question is less the flow of it, I think that kind of makes sense, and more like who is using this. What's the use case for this, specifically, if I'm using hyper63?
A: Basically, one of the clients I'm working with is building a rules engine, and you can think of a web form out there that is asking for some information.
A
Let's
say,
let's
say
that
they're
registering
for
an
event
right,
so
a
user
is
filling
out
this
form
and
based
on
what
they
fill
in
the
rules.
Engine
is
going
to
assign
them
different
presentations
to
go
through,
let's
say,
create
kind
of
a
zoom
schedule.
A
But
behind
the
scenes
this
workshop
or
or
this
event
is
working
with
partners,
sponsors
and
those
sponsors
want
anyone
that
registers
for
one
of
their
workshops
in
their
salesforce
tool
right.
A: So what will happen is, when they submit the form, it will run through the rules and assign them to the event, and then respond back to the user and say, hey, you're registered, here's your schedule, thank you for choosing to attend this conference. And then, behind the scenes, for each one of those sponsors...
D: Yeah, and the queuing part is just so that, one, you don't have to wait, but also so that you have an elegant way to check if something fails, not just a post into an abyss.

A: Correct, exactly.
A
So
you
know,
let's
say
you're
wiring
up
salesforce
through
something
like
zapier
or
some
other
service,
and
you
know,
for
whatever
reason:
salesforce
isn't
available
for
this
client
or
are
some
the
other
targets
not
available.
A: No, no, please tear it up. But really, the goal is, if we can build a service where the client team doesn't have to know what SQS is, or what an Amazon key and secret is, or what Google is and what metadata is there, then basically all they have to do is know how to use JSON, how to use HTTP, and how to use JWTs, or web tokens, and they should be good to go. That's kind of the goal: they're getting a queue service with really not much overhead.
C: And validate that what was published to you is legitimate. But that pattern can be covered with documentation and resources: this is how you build a webhook.
A
Yeah
yeah
and
that's
a
good
segue
on
the
client,
so
looking
at
some
other
similar
services-
and
you
know
one
one
service
in
particular
basically
came
up
and
said:
you
know
what
we're
deprecating
our
client,
here's
our
rest
api
and
just
use
it
and
I'm
curious
to
see
what
your
thoughts
are.
I
mean
I
created
a
client
and
it's
very
opinionated
in
terms
of
functional
programming,
but
you
know
I.
B: I mean, I do think there will be some really common patterns that emerge out of how users consume the different ports, like: I want to send some data, I also want to cache it, and I also want to index it. That seems like such a common pattern, and I think folks might want something they can just plug and play.
B: And there are a bunch of utilities that users just create in the open source domain. I'm thinking, I don't know, you have like a store, search, and cache sort of utility, and then you can compose that with a bunch of other utilities, and you kind of build your own clients, but they'd be sort of not fully featured; each would do one specific thing.
B: Maybe, instead of building a client, we could build one of those plug-and-play things that implements an API, and then just say, hey, if you want to do this sort of thing, just implement this API and then use a tool like Ramda to compose them all together. And then all we have to maintain...
A: Yep, no, I like it. My mind immediately jumps to a wizard screen where you can check the boxes that you want and say, generate client.
A
It
yeah
compose
something
for
you.
B: Yeah, I mean, I guess it'd be very similar to how our adapters work as well, because when I was putting together the pattern for how to compose adapters, the first place I looked was how Rollup does it, and that's where I got the idea for the life cycle for the adapters, or the plug-ins, I should call them. So I figure building a client would work in a very similar way, and it would prevent us from having to maintain our own client in a bunch of different languages.
B: Sorry, I digress, but I sympathize with the idea of not maintaining a client. I do think there will be some common patterns that emerge, and if we had something that allowed users to just leverage that pattern without a whole lot of lifting, I think that would help.
A: All right, well, is there anything else? Thanks for the awesome discussion and all the great questions. Feel free to go into that document, play devil's advocate, and blow it up, but I really appreciate all the thought that's gone into this. I think it might be a cool service.