From YouTube: JSF Architect: Lambda on Easy Mode
Description
JSF Architect: Lambda on Easy Mode - Brian LeRoux, Begin.com
AWS Lambda enables developers to focus entirely on their application logic, free of infrastructure concerns. Lambda functions can be developed in isolation, deployed in seconds, immediately available with zero downtime, and (at least theoretically) endlessly scalable. If these properties aren't enough to excite developers, the cost is also remarkably low: 1 million invocations a month are free.
About
Brian LeRoux
Brian is a former member of the Adobe PhoneGap team and helped to foster the Apache Cordova project. He is also responsible for wtfjs. Lately he has been building begin.com with AWS infra using arc.codes.
Alright, let's get into it; you probably want lunch. My name is Brian. I split my time between here and San Francisco, and I'm a co-founder of a company called Begin. I probably have a little bit more notoriety for working on a thing called PhoneGap, but I don't do that anymore. These days I'm doing serverless stuff, and this is one of my favorite quotes as it pertains to software development, because I think it's super true, especially when you think about your dependency tree.
So I used to do this: I used to rack physical servers, in a different time. We would get a box, we would open that box up, and then we would shove the thing onto a rack and plug a bunch of cables in. We wouldn't label anything, and then we'd need to scale, so we'd get another box and plug that in. That worked for a really long time, and then virtual machines became a thing, and this rapidly changed how we were able to provision stuff.
We were able to get ourselves into more problems way faster than before, and that was pretty great. But virtual machines are kind of slow, and physical machines didn't scale very well vertically, so we started to use lots of commodity servers, and virtual machines kind of became their own special hell. Very recently you have the concept of containers, which you are all probably familiar with from Docker and Kubernetes; they give you fast startup times, and they fix some of the problems we had dealing with virtual machines.
There appears to be an ever-tightening cycle in how we deploy stuff, and cloud functions are the latest iteration of this idea. If you take anything from this talk today, I want you to think about how cloud functions are a totally different metaphor from a physical server. We've been doing servers a long while; cloud functions we just started doing, and I would like to propose that we still don't know what we're doing, but we're getting better at it, and it's created its own problems, because it's a different type of solution.
The nice thing about cloud functions: you can deploy them really quickly. Another nice thing about cloud functions like Lambda is that they're effectively free. You get a million executions a month for free, and every million executions after that is a dime, so if you're doing 10 million executions a month, it's 90 cents.
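Using the per-request numbers quoted here (first million free, a dime per additional million; real Lambda pricing also bills compute duration, which this sketch ignores), the arithmetic works out like this:

```javascript
// Request-cost sketch using the talk's numbers: first 1M invocations
// free, then 10 cents per additional million. Duration billing ignored.
function requestCostCents(invocationsPerMonth) {
  const FREE_TIER = 1e6
  const CENTS_PER_MILLION = 10
  const billed = Math.max(0, invocationsPerMonth - FREE_TIER)
  return (billed / 1e6) * CENTS_PER_MILLION
}

console.log(requestCostCents(10e6)) // 90 cents, i.e. $0.90/month
```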
So those are pretty good unit economics, but it's not the free tier that's most interesting to me; the isolation is interesting. Instead of your application being a single monolithic ball that can break, you split things up into these individual little functions, and you have a high degree of isolation. So now, when you make a change to one part of your app, the other parts of the app don't fall down. Cloud functions are a totally different metaphor.
That's interesting, and, possibly stupidly, at the risk of my own startup, we decided that this is where things are going, so we're going there right now. About a year and a half ago I decided we would move our entire infrastructure over, because we're building a bot on Slack, and we think this thing is really well suited to that type of workload. And we did the thing that you would always do when we started building our stuff.
We built a web server and we deployed it in a single cloud function, and it worked really well, actually, until our function started to have some logic beyond hello world. Lambda functions in general, as they get bigger, start to exhibit properties that are not optimal: they have cold start times that are really slow.
They become difficult to deploy, because they get really big, and it's just not the way things should be built. So we started to separate our functions out, basically by route, and things got really good. But one of our problems was that things got really complex: now for every route we had a function, which is what you do logically in your architecture, but it means that for every route in our application we also had to deploy a function individually. We had to set them up individually, and we had to add environment variables to each one individually, and it got really difficult to manage. When things get difficult to manage and you're doing it manually, you end up with checklists, things get out of sync, it gets really hard to reproduce your environments, bugs happen, people get mad, and you're up all night trying to figure out what went wrong.
Another big issue going this way: I've been working with Amazon for a very long time, so this view of AWS does not terrify me as much as it should. I totally don't know what half of these things do. I think most people are the same way: when you land on AWS, you're like, okay, cool, let's go build something.
There's just a lot of stuff there, and that cognitive overhead comes with its own cost. You can't really know what you're going to build until you know how to build it, which would lead you to think that you have to master this whole thing to build something, and that's really not the case. We solved most of these problems in the last generation of metaphors, with servers, and we called that thing infrastructure as code.
The manifestation of infrastructure as code, effectively, is that you check a manifest file into your repo, and you version your infrastructure in that manifest file. You run some commands against it and it does its thing. There's a whole bunch of tools that do this, but they all basically work the same way: you have this manifest file in the root of your repo (it can be Ansible, it could be Docker, it could be whatever), and then you have some kind of global CLI binary that you run against it to recreate your business.
A
These
things
did
not
exist
when
we
started
and
had
they
I
may
not
be
have
given
this
talk
right
now,
but
these
things
exist
now
and
so
I
think
it's
important
that
you
know
that
these
things
are
ways
to
to
do.
Infrastructures
with
Cobb
functions
and
I
think
this
will
also
help.
You
understand
why
architect
from
JSF
is
on
possibly
a
nicer
way
to
work.
So
terraform
is
from
a
company
called
hasha,
corp,
they're
famous
for
a
thing
called
bay
grant,
and
it's
getting
quite
a
bit
of
popularity.
It's
a
manifest
file.
A
You
run
against
it.
It
creates
some
provisions
infrastructure
for
you
on
the
cloud
server
this
framework,
the
capital
S
service
framework,
is
a
venture
back
company.
They
have
a
manifest
file.
You
run
some
stuff
against
it.
It
generates
infrastructure,
Amazon
didn't
want
to
be
left
out
of
a
party,
so
they
created
their
own
thing.
They
call
it
Sam,
which
stands
for
a
serverless
application
model
yeah,
and
it
does
the
same
thing.
So,
let's
take
a
look
at
these
manifest
files.
Terraform
create
a
thing
called
HCl
I
assume
that's
because
they're
sponsoring
companies
hasher
Corp.
It's not too bad. I think once you understand what this is doing, it's not awful, and if you check this into the root of your repo, anyone on your team would be able to create that Lambda function and you'd be off to the races. I don't think you'd write this by hand, and I don't think you'd write it from memory; you'd probably be looking stuff up and copy-pasting. But that's okay, it's a way of doing things.
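The Terraform file on the slide isn't captured in the transcript; a minimal sketch of what an HCL manifest for a single Lambda function might look like (all names, runtimes, and ARNs here are illustrative, not from the talk):

```hcl
# Illustrative Terraform manifest for one Lambda function.
provider "aws" {
  region = "us-east-1"
}

resource "aws_lambda_function" "hello" {
  function_name = "hello"
  handler       = "index.handler"
  runtime       = "nodejs6.10"
  filename      = "hello.zip"
  role          = "arn:aws:iam::123456789012:role/lambda-exec" # hypothetical role
}
```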
A
Server
lists
I
think
took
inspiration
from
ansible
cuz.
They
decided
to
go
with
llamo
jamol's,
fine,
it's
another
format.
It
has
significant
whitespace,
which
I
don't
particularly
love,
because
if
you're
missing
one
of
these
spaces,
you're
and
you'll
never
find
that
bug.
But
you
know
so.
This
provision
is
a
couple
of
lambdas
in
a
dynamodb
table.
Again
there
is
a
great
deal
of
inside
knowledge
required
to
write
this
file.
You
would
not
write
this
file
by
hand.
Probably,
but
you
know
it's
powerful.
This
idea
is
really
great.
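Again the slide isn't in the transcript; a serverless.yml along the lines he describes (a couple of functions plus a DynamoDB table) might look roughly like this, with every name here illustrative:

```yaml
# Illustrative serverless.yml: two functions and a DynamoDB table.
service: demo

provider:
  name: aws
  runtime: nodejs6.10

functions:
  hello:
    handler: handler.hello
    events:
      - http: GET /
  save:
    handler: handler.save
    events:
      - http: POST /save

resources:
  Resources:
    NotesTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: notes
        AttributeDefinitions:
          - AttributeName: noteId
            AttributeType: S
        KeySchema:
          - AttributeName: noteId
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
```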
Amazon didn't want to be left out of the party, and they really like the term serverless; I don't think they liked that Serverless got a trademark on it. So they created SAM, and it's based on their internal infrastructure-as-code tool, which is called CloudFormation. I actually really like CloudFormation; I think it's a brilliant product.
A
It
to
me
looks
a
lot
like
serverless,
except
for
it's
their
service,
not
services,
service
services,
services,
Connie
cuz.
It
can
run
on
multiple
clouds.
Obviously
awsm
only
runs
on
Amazon's.
The
other
comments
are
you
know.
Problems
I
have
with
this
format
are
similar
like
you
have
to.
You
have
to
know
a
lot
of
stuff
to
be
able
to
write
this
file,
and
you
probably
won't
know
that
stuff
off
the
top
of
your
head,
and
you
know
if
it's
missing
a
space
here.
Good
luck,
finding
it.
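For comparison, a minimal SAM template for one function behind an API route might look like this (again illustrative, not the file from the talk):

```yaml
# Illustrative SAM template: one Lambda behind an API Gateway route.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs6.10
      CodeUri: ./src
      Events:
        GetRoot:
          Type: Api
          Properties:
            Path: /
            Method: get
```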
A
So,
to
summarize
my
exceptions
to
these
assertions,
I
think
you
need
deep
proprietary
knowledge
to
understand
how
to
configure
and
set
up
really
basic
stuff
and
that's
okay.
But
it's
a
thing:
I'm,
not
a
huge
fan
of
hand,
authoring,
Yambol
or
JSON
I
think
some
people
might
be,
and
that's,
okay,
it's
not
for
me.
It's
missing
comments.
You
know
it's
got
bad
bad
characteristics
for
bug
resolution.
If
you
have
an
extra
space
and
I
feel
like
most,
this
tooling
kind
of
smells
a
lot
like
servers
and
I'm
over
it.
We're
not
doing
that
metaphor
anymore.
We're thinking about things differently now; functions are the metaphor. Also, I kind of think that we're actually committing AWS infra configuration arcana into our revision control systems, which is kind of like committing a build artifact into your revision control system, and we already know that's a bad idea. So we've traded one set of problems for a new set of problems; they're not bad problems, but they're there.
A
So
what
architect
proposes
is
infrastructure,
as
code
was
a
really
great
metaphor
for
servers,
but
we
want
to
put
forward
the
idea
of
architecture
as
text
and
so
we've
defined
a
manifest
format.
That's
a
little
bit
more
chilled
out
the
manifest
file
called
dark.
So
it's
not
in
your
face.
It's
just
a
hidden
file
in
the
routier
repo,
and
it
looks
like
this
so
I'm
gonna
explain
these
little
bits
here.
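The .arc file on screen isn't captured in the transcript; based on the early Architect format (the section names and routes here are my assumptions, and later versions of the format differ), a small one looks something like:

```
@app
testapp

@html
get /
post /like

@json
get /api/notes
```

Each route under a section becomes its own Lambda function, with the section name indicating the content type it serves.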
If we run npm run create against a .arc file, it will generate this single get-index function and deploy it, and a .arc file becomes very expressive after a little bit of time. If I run npm run create against this .arc file, it'll generate a folder structure that anybody can understand pretty quickly: you can see the HTTP verb, and you can see what content type it's dealing with.
A
So
let's
take
a
quick
look
at
the
generated
code.
Actually,
maybe
this
is
a
good
time
to
just
start
writing
some
code.
What
could
possibly
go
wrong
so
I'm
on
my
desktop?
Can
everybody
read
that
back
there,
yeah
you're,
good,
okay,
cool,
so
I'm
gonna
make
a
project,
call
it
like
what
what
and
I'm
gonna
NPM
and
knit
it
so
that
just
created
a
package.json
file
and
I'm
gonna,
install
architect,
workflows,
I'm
gonna,
save
that
arguably
I
could
save
this
as
a
dev
dependency.
But
it's
just
a
demo.
A
One
thing
to
note
arc
is
I
call
it
arc,
which
is
kind
of
weird,
maybe
but
arc
is
running
on
node
6.10
with
the
default
NPM
install
it's
not
because
I
like
that.
It's
because
that's
what
Amazon
forces
us
to
use
with
lambdas
and
I
am
disinterested
in
trying
to
repackage
node
inside
of
AWS.
So
it
takes
a
second
to
install
that
we're
there.
So
I'm
still
here,
nothing
there.
Just
some
packages
I'm
going
to
touch
an
arc
file
which
is
just
a
text
file
I'm,
gonna
edit.
A
Jump
over
to
my
package:
JSON
we'll
add
that
create
script,
so
this
is
kind
of
weird
people.
Don't
like
this
Amazon
uses
environment
variables
to
dictate
which
Amazon
account
you're
using
and
which
region
or
data
center
that
you
want
to
deploy
to
I
I
have
multiple
Amazon
accounts,
it's
kind
of
a
thing
that
happens
so
I'm,
just
gonna
use
my
personal
one,
I'm
gonna
add
another
script
here:
oops
deploy,
deploy.
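Assembled from the demo narration, the package.json he's building up probably looks roughly like this; the script names come from the talk, but the CLI binary names are my best recollection of the early workflows package, and the profile and region values are placeholders:

```json
{
  "name": "what-what",
  "scripts": {
    "create": "AWS_PROFILE=personal AWS_REGION=us-east-1 arc-create",
    "deploy": "AWS_PROFILE=personal AWS_REGION=us-east-1 arc-deploy"
  },
  "dependencies": {
    "@architect/workflows": "*"
  }
}
```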
A
Well,
let's
run
it
so
this
isn't
as
fast
as
I'd
like
it
to
be.
No,
that's
funny
I
created
this
this
morning,
so
some
of
these
tables
already
existed.
One
thing
to
note
about
arc:
if
something
exists,
it
just
skips,
so
it
never
deletes
anything.
And
so,
if
you
want
to
delete
infrastructure,
be
my
guest,
we're
not
gonna.
Do
that
programmatically
I,
don't
want
to
automate
destroyed
your
Amazon,
so
it
would
be
a
bad
thing.
This is maybe not the most popular opinion, but I'm a huge fan of Express, so we cloned Express's API, though not perfectly, and we did that on purpose. You have a request object, and you have a response function that you invoke with a named parameter for what you want it to do; that way it's kind of React-y. So I'm going to upgrade my function.
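The generated handler isn't shown in the transcript. The shape he describes (a request object plus a response function called with a named parameter) can be sketched with a stand-in wrapper; the real @architect/functions API differs in its details:

```javascript
// A route in the style described: request object in, response function
// invoked with a named parameter (here `html`) saying what to send back.
function route(req, res) {
  res({ html: '<h1>Hello from ' + req.path + '</h1>' })
}

// Stand-in for the arc wrapper, so the pattern is runnable here: it adapts
// a (req, res) route into a Lambda-style (event, callback) handler.
function htmlGet(route) {
  return function handler(event, callback) {
    route(event, function res(params) {
      callback(null, {
        statusCode: 200,
        headers: { 'Content-Type': 'text/html; charset=utf8' },
        body: params.html
      })
    })
  }
}

exports.handler = htmlGet(route)
```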
A
I'm
stoked
that
note
Interactive's
in
Vancouver
by
the
way
there
was
never
cool
conference
this
year
when
I
was
living
here
as
a
developer.
Okay,
so
no
picture
that
was
a
good
typo.
Eh
people
are
always
like
you're
Canadian
eh.
This
is
what
happens
so
I'm
gonna,
deploy
that
and
we're
done,
that's
kind
of
amazing
right,
like
we
just
stood
up
a
website
and
redeployed
to
it
in
how
long.
A
2
seconds,
that's
actually
kind
of
slow,
so
this
is
also
something
you
might
notice.
It's
neat
we
deployed
twice
so
arc.
It's
cheap
right.
It's
like
a
it's
like
a
buck
for
10
million
executions,
so
we
create
two
of
them.
Two
functions
for
every
logical
component
of
your
application,
one
for
production,
one
for
staging
and
the
deploy
process
is
completely
isolated.
So
I've
got
my
staging
lambda
here,
let's
upgrade
my
production
lambda.
Now
the
production,
lambda
I
decided
to
make
an
environment
variable,
we
say
arc,
deploy,
production
and
p.m.
Amazon doesn't want you standing up and tearing down APIs every second, so they actually rate limit you: you can only delete one API a minute. This has turned into a fairly big pain in my ass. So it just takes a second to create the infrastructure; once it's created, obviously, the subsequent deploys are in the seconds, which is kind of nice, but this part's a little bit painful. So great, we deployed, and now I go to my staging API and I get forbidden JSON. That's funny!
And this is kind of fun: arc deploy does not get slower as you add stuff, because it spawns processes and does them in parallel, so you can have as many Lambdas as you want and the deploy is still going to be a couple of seconds. And so we're live with our API endpoint. That's a pretty good start to understanding the benefit: this is an infinitely scalable endpoint that will never go away and costs me less than a dollar a month for 10 million executions.
Arc also supports SNS events, which are effectively pub/sub Lambdas. These are great for doing background processing: things you want to do that might take a little while, when you still want to get back to the user quickly. We use these pretty extensively in our bots for Begin, and actually, I don't know if I should show you this, so I'm going to show it to you anyway.
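In the early Architect format, pub/sub Lambdas like these were declared in their own .arc section, something like the following (the event names are made up; the @events section name matches the format as I understand it):

```
@events
account-signup
send-welcome-email
```

Each name becomes an SNS topic with a Lambda subscribed to it, and application code publishes to the topic by name to kick off the background work.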
A
So
this
is
the
secret
sauce
behind
our
bot.
We've
built
a
slack
bot
that
lets
you
do
tasking
inside
a
slack
and
I
think
this
is
kind
of
an
interesting
thing
about.
Our
can
interesting
side
effect
is
that
everybody
in
this
room
can
read
this
file
and
they
kind
of
understand
how
our
bot
works,
which
is
amazing,
I,
don't
think
you
would
get
that
same
property
out
of
reading
a
gigantic
yamo
file
or
a
gigantic
hhcl
file.
A
Yeah
so
events,
oh
yeah,
that's
why
I
went
over
here.
So
our
bot
is,
you
know
it's
a
real-time
bot,
so
it
deals
with
a
lot
of
events
and
we
found
that
we
actually
have
very
few
HTML
lambdai,
but
quite
a
few
event.
Lambdas.
We
were
very
happy
with
the
performance
there.
Our
bot
usually
responds
within
200
milliseconds.
A
We
also
do
a
fair
number
of
these
cron
jobs
or
daily
notification
type
things.
And
if
you
look
down
here,
you
can
see
them.
Cron
syntax
is
not
the
kindest,
but
this
effectively
would
generate
two
lambda
functions
that
would
run
on
that
schedule.
I
never
need
to
set
up
a
server
again
to
do
a
cron
job
which
opens
up
the
world
of
a
lot
of
possibilities.
I've
got
one
that
checks
hockey
scores
my
pool,
coincidentally,
it's
sad
because
I'm
looking
Oxfam
I'm
still
not
over
it
yeah.
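The scheduled section he's pointing at isn't legible in the transcript; in the early Architect format it would look something like this, with the names and schedule expressions illustrative (the rate/cron expression syntax is CloudWatch Events'):

```
@scheduled
daily-digest rate(1 day)
check-scores cron(0 14 * * ? *)
```

Each entry becomes a Lambda function invoked on that schedule, with no server to maintain.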
We also do triggers. I remember when I was a younger programmer, database triggers were bad; but they're back, and they're good, because you don't have to write them in SQL anymore, you can write them in JavaScript. These are DynamoDB triggers, and now any time a particular table I've subscribed to does an update, I can do something like back it up, or mutate the data, or ensure some kind of integrity.
A
When
we
added
a
road
to
our
accounts
table,
we
shoot
off
an
email
using
a
trigger
to
say,
welcome,
so
there
they're,
really
useful
they're,
really
handy
and
that's
actually
about
it
for
arc.
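A tables section with a trigger, in the early Architect format, looked roughly like this (the table and key names are illustrative, and the exact trigger keywords are my recollection of that era of the format):

```
@tables
accounts
  accountID *String
  insert Lambda
```

The doubly indented lines define the partition key and an insert trigger; the trigger becomes a Lambda subscribed to the table's DynamoDB stream, which is also the one place in the format where whitespace is significant.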
So: you declaratively define your architecture's high-level primitives in a .arc file, with no AWS configuration arcana, just plain old text that anybody can read. There are workflows, which are npm scripts local to your code, that do all the configuration and provisioning, and then of course there are the cloud function wrappers themselves. Oh yeah, that's right, this is funny.
A
Aang
added
this
later
so
NPM
start
you
can
run
offline.
This
was
a
issue
for
us.
You
know
we
were
when
we
initially
started
building
our
thing.
We
we
actually
were
deployed
so
fast.
We
didn't
really
miss
working
offline,
but
one
of
our
team
members
lives
in
Auckland
and
he's
got
to
take
the
BART,
which
is
stands
for
Bay
Area
Rapid
Transit.
It's
like
a
ironic
name
because
there's
nothing
rapid
about
it
but
anyways
he
kept
saying
dude
get
this
thing
running
offline,
so
we
did
I
use
no
DMV
for
this
pretty
extensively.
Arc expects it, and you still set up your AWS profile, because Amazon expects it. And this is arc's sandbox. Oh, my region's broken... oh yeah, I don't know if it would complain. I wonder. Thanks, Chris. So now npm start will run our application offline. In theory you could use this to run the thing on-site or somewhere else, but then you would have completely negated the benefits of the cloud, so I don't know why you would want to do that.
A
But
if
you
did
want
to
do
it,
you
could-
and
you
can
you
know,
do
your
development
all
on
localhost
and
then
deploy
when
you
feel
like
it
or
when
you
get
off
the
BART
or
whatever,
so
that
works
that's
pretty
cool
by
default.
We
always
just
deploy
to
staging.
So
there's
no
fat
finger
into
production
by
a
mistake
and
then
Arctic
low
production
is
you
know
a
little
bit
tricky
to
do?
A
We
actually
don't
deploy
locally
very
often
now
most
of
this
stuff,
we
just
hid
behind
our
CI
system,
and
so
check-ins
are
just
constantly
deploying
to
staging,
and
then
when
we,
you
know,
feel
brave
enough,
we'll
promote
stuff
to
production,
which
is
usually
a
couple
times
a
day
and
that
that
actually
to
me
is
the
big
thing
about
cloud
functions.
Oh
it's,
not
the
cost.
It's
the
deploy.
Speed
is
the
big
deal.
A
rolling
deploy
of
servers
can
take
a
long
time,
especially
if
you
have
sticky
sessions.
Some of them were not deploying all the time, apparently, and some of them we are. These things go; I don't think we have a deployment that runs any longer than a minute, which is a pretty big deal compared to how it used to be back in the day. PhoneGap Build, by comparison, would take hours for us to roll a deployment, but it was a Rails app, so I think that's how they work.
A
This
whole
thing
is
open
source,
so
we
built
this
with
our
company,
but
we
donate
it
to
the
j/s
foundation,
I
believe
in
open
source
governance
as
well
as
open
source
code.
So
anyone
can
contribute
to
this
and
anyone
can
modify
this
and
anyone
can
do
it.
They
want
with
this,
and
the
arc
file
format
is
pretty
simple,
so
you
could
extend
it
in
your
own
ways.
If you chose to, you could, and the parser is really, really dead simple. Yeah: before arc, it used to be kind of painful, and after arc we're now quite happy; we're really enjoying working with Amazon and AWS. It's a bit of a shift in how you think, because you're now architecting functions instead of architecting servers, but once you get over that conceptual hump, things are pretty good. You can find all this stuff at the sweet domain name arc.codes, and thanks for having me.
[Audience question about whether whitespace is significant in the .arc format] It is, actually, yeah; we ended up with it even after me complaining about it constantly. It is significant in one case, and that case is tables. Let's see if I can find an example... so tables take two indents, so I know what the key and/or triggers are, but otherwise whitespace is totally unimportant. We strip it all, and we strip all comments, when we do the .arc file parse. So I guess it's not a binary yes or no; it's sort of important, if you're doing tables.
[Audience question about authentication] Well, I don't want to take too much time, but after this I could demo for you how I would create a login flow. I personally wouldn't use Cognito, but I know some people have, and they like it, and that's cool. We were trying to be as vanilla as possible, because a longer-term goal of this project would be to be portable across clouds, and once you use Cognito, you're baked right in. I don't think that's a bad thing either, by the way; people talk about this cloud lock-in thing, and it's not lock-in.
Yeah, it runs. So the question is: how do we do the debugging? I actually don't use a debugger. I know, that's shocking, and it's not because I don't have bugs; I have lots. I write tests; that's how I learned to develop, and that's my workflow. But I know that if this is offline, running in its local state, you can totally hook up a Node debugger to the process, the same way you would with Express, if you want to, or if that's your style.
A
Yep
yeah
that
works.
You
can't
some
some
crazy
person
out
there
actually
got
chrome
inspector
working
with
the
remote
lambdas
I
haven't
played
with
this
myself,
because
I
gained
I,
don't
use
debuggers,
but
yeah,
it's
it's
there
for
you.
If
you
want
to
do
it
and
it
can't
work
remotely
to
Amazon
and
I
want
to
gloss
this,
the
user
experience
for
developing
these
things.
This
can
be
pretty
tricky.
So
this
is
a
lambda
function
in
all
of
its
glory
and
AWS
console.
And
apparently
this
one
gets
an
email,
login,
okay
and
we
have
no
errors.
A
Apparently
I.
Don't
believe
that
for
a
second,
so
this
is
a
maybe
gonna
leak.
Somebody's
information-
I,
don't
know,
but,
like
you
have
to
dig
around
logs
to
do
your
your
development
time,
stuff
and
cloud
watch
is
not
the
most
awesome
place
to
dig
around
for
logs.
So
a
lot
of
people
are
using
either
a
tool
called
honeycomb.
There's
another
one
called
who
me
oh
and
they
ingest
these
cloud
watch
logs
so
that
you
can
locally
search
structured
data
instead
of
poking
around
the
AWS
console.
It's less of a problem in arc, because we have the isolated staging and production Lambdas, so it's very easy for us to reproduce a bug and then just go check out the CloudWatch logs. But some people like having more robust monitoring solutions for their production Lambdas, and I recommend that you definitely look into that. Oh, you can see our DynamoDB needs more provisioning. That's funny!
[Audience question about whether they still run any servers] We have two servers. We have one server for our Small Wins website (Small Wins is the name of our company, but we ended up with begin.com as a domain, so we're not really using this one anymore). I don't actually even know how it's running, but I know there's a server for it, because I tagged it in AWS. And this is funny: we have a second server, and we totally don't know what that one does.
A
We
know
we
started
at
like
last
year
and
I'm
kind
of
scared
to
turn
it
off
because
it
might
be
doing
something,
but
whatever
its
doing
I
don't
know,
I've
been
planning
to
get
in
there
and
figure
it
out,
but
yeah.
We
the
whole
thing
that
we've
built,
and
so
we've
built
a
bot
for
slack
and
a
companion
website
for
working
with
that
bot
that
does
tasking
on
a
per
channel
basis
and
it's
yeah.
It's about 500 Lambda functions in production, with about 10,000 users, and our AWS bill is a whopping 200 bucks a month, and almost all of that's going to Dynamo. But you know, with that usage, it's cheap. People say, oh, Dynamo is really expensive, and they're right, it is, but it's still cheaper than a DBA.
[Audience question about Step Functions] Yeah... I don't know, because I don't find writing state machines that hard, and Node has great libraries for it, so I don't know what the use case is there. But I'm not against them; if someone's got a really good use case, open up a bug for me and I will implement it, I'm totally into it. I'm actually a little more excited about Greengrass and Lambda at the edge, and what the potential could be there.
So there are these new flavors of Lambda. Greengrass runs on IoT devices, which is kind of trendy, but I think there's a lot of potential there, and Lambda at the Edge is like running Lambdas right on CloudFront. The challenge with Lambda at the Edge is that you usually only get 40 milliseconds of execution time, and 30 milliseconds of that's going to be Node's startup, so you've got ten milliseconds. You can do a lot in ten milliseconds.