From YouTube: Fission FaaS on Kubernetes
Description
Kubernetes Community Days Bengaluru'21
Serverless and FaaS frameworks allow engineers to focus on creating value by writing code, without having to understand all the underlying details. FaaS on Kubernetes is still in its nascent stage and evolving fast. Fission is a serverless framework for Kubernetes which is simple and fast. Fission is portable and works on Kubernetes, so you can deploy anywhere from cloud to on-premises. This workshop will get you ready to use Fission and give you enough of a starting point to contribute.
A: Cool, so thank you, everyone, for taking time out of your day to attend this workshop. We'll be talking about Fission, of course, as you might have registered already, which is the FaaS framework on top of Kubernetes.
A: So before we go into the actual workshop, why don't all of us introduce ourselves, talk about our background in Docker and Kubernetes and why you're interested in learning Fission; then we'll go on to the table of contents, cool?
A: So maybe, yeah, let's start with Nikhil.
C: So we have five developers who work on the backend and DevOps, and my interest in learning Fission is because we have been looking to migrate our admin application to our own... We already have a production Kubernetes cluster, so we want to have serverless APIs running in the same one. So we came across OpenFaaS and Fission, we had an interest in that, and that's why we thought of trying it out. Yeah, that's about me.
D: Okay, hi everyone, my name is Ganesh and I'm from Pune. I started as an intern four years ago, at Red Hat itself, and currently I'm working as a technical support engineer, primarily on OpenStack, basically. I would like to switch my profile to DevOps very soon, so currently I'm in the learning phase, to be honest. I would rate myself as an intermediate in Docker and Kubernetes and Fission.
A: Great, great. Gary, you're on next.
E: Yeah, hi all, so this is Gary Sullivan, working with IBM on open source tools. We work on porting various open source tools to the Z platform, so I've been working on adding Kubernetes support on Z for the last four years. I've worked on various tools on Kubernetes, and not only Kubernetes: Istio, then OpenShift.
E: Yes, that's it, and the reason I joined this workshop is to know more about Fission and what its functionality is.
F: Hi everyone, my name is Gaurav. I'm working as a developer advocate with InfraCloud, where I'm working with Vishal as well. I'm here to bridge my gap with Fission, so that I understand the system inside out. I'm a Docker community leader and, like Vishal, I co-organize multiple meetup groups in Pune.
A: Cool, sounds good. I was just checking if anybody else is joining, or if it's actually just all of us; so yeah, we're starting. I think it's only 15 minutes past, cool. So, the table of functions, and not contents, as you can imagine, for the fun of functions there. So we'll talk about, first of all, understanding some basic concepts in the context of Fission: what is an environment, what is a function, what is a package, what is a trigger and stuff like that, and towards the end of it you'll get some sense of what Fission's concepts are. Then we'll cover a very brief overview of the architecture and some of the internals, the environment things that you need to understand before you go further.
A: We'll then go to scheduling and execution strategies, and I think there are a lot of important areas here which are very different compared to, let's say, other platforms, and we'll talk about a couple of those in detail. Then we'll go and talk about event sources and sinks, what they mean and how they're built out in Fission, and next we'll talk about Fission specs, CI/CD and a bunch of other areas.
A: Now, till this point in the workshop we'll be doing some hands-on, but it will be a little lighter. In the next section, which is putting it all together in a hands-on demo, we will actually get super detailed: we have like four or five demos that we will end up working on, and based on time and stuff we can adjust the pace, of course.
A: Next we'll talk about contributing to Fission; there are multiple avenues to contribute to Fission, not just the Fission code, there are a lot of interesting areas. And then, finally, we'll talk about advanced areas like workflows or multi-tenancy.
A: We'll talk about a couple of customer use cases, and we'll talk about how they scaled out Fission in production environments and stuff like that. And yeah, that's a very brief overview of what we're going to cover today.
A: So let's get the basics right. I hope all of you do have a Kubernetes cluster ready, either kind or, you know, GKE or whatever equivalent you have, basically. I also hope you have Fission installed; on my current setup I've got the latest release, 1.13.0, which just came out last week, but you can also use anything 1.12.0 or later. And I'm assuming, definitely, that you have kubectl and the fission CLI installed.
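[For reference, a minimal sketch of commands to sanity-check these prerequisites; the "fission" namespace assumes a default install:]

    # Check that the fission CLI can reach the server, and compare versions
    $ fission version

    # Confirm the Fission control-plane pods are running
    $ kubectl get pods -n fission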
A: And, of course, some sort of IDE; VS Code is what I'm using, but you can use whatever works for you easily. Cool, so just making sure all of us are set up with the prerequisites, or do we need any help with that? Good, I'm assuming we are good; great, all right. So let's start with some basic components, or a basic understanding of the concepts in Fission.
A: Now, before I go there: how many of you have worked with custom resources and understand what a custom resource is in the context of Kubernetes?
A: Great, great, perfect, perfect, cool. So accordingly I'll try to, you know, massage the concepts, because if you were new here I would probably talk a little differently; but assuming you know CRs and stuff, that's cool.
A: Yes, we are recording. I believe it should definitely be available to the attendees, and probably eventually to a slightly larger audience as well, but I'll check; you definitely should get access to the recording. Sure, sure, cool, thanks. Cool, so Fission is a fast and simple functions framework for Kubernetes; it is built on top of Kubernetes, and CRs and controllers are what it is basically built off of.
A: One thing it allows you to do is write only the code and not worry about an image, or Kubernetes manifests and stuff like that; they are abstracted from you to the extent you want. It is not that they're completely abstracted away: if you want to still play around with them, you can do that, but if you don't want to, you can stay abstracted and still write code and run functions on Kubernetes.
A: We promise a 100-millisecond cold start, especially in a warm-pool scenario; we'll talk about warm pools slightly later in the workshop. You can also build your code in the cluster, so you don't have to do it locally, and it does support integration with quite a few event sources; we'll talk about that in detail, definitely. And there is support for tolerations, volumes, security contexts and a bunch of other commonly requested features that we'll talk about again later, for sure.
A: Secondly, it allows you to not just write functions but also deploy them, and it allows you to not just give your code to Fission: you can also give it a Docker image. So based on your preference, and how comfortable you are or how deep into the system you want to get, you can decide. You can say: I just want to write the function code and not worry about Dockerfiles or Kubernetes manifests; or you can say: hey, I already have a Docker image, simply run it for me, right?
A: So all those kinds of formats are supported by Fission. Now, once you've deployed this to Fission, you actually want to call that function or microservice some way or the other. So HTTP is one way, of course, and HTTP is a broader umbrella term: we're talking about WebSockets, potentially gRPC in the future, but all those things are supported for you to call it.
A: Okay, am I audible now? Yep, okay; something happened with the audio, it disconnected and then connected again. Cool, so yeah, I was just saying: the broad HTTP umbrella is one of the ways you can call it. The other way you can call functions is using cron.
A: So you can say: every X minutes, every X hours, whatever that frequency looks like, and in those ways functions can be called. The last part, which is probably one of the major parts, is the message queues, or data sources, as I would like to call them, because nowadays it's not just about message queues; it's also about data sources, and any change in them invoking a function and stuff, right?
A: So for message queue integration we use a framework called KEDA. It's an open source project, and we have written a layer on top of that to enable some of the integrations in an easier way, using KEDA as the underlying technology. And with KEDA it supports a whole bunch of integrations: as of today we do support Kafka, we do support NATS, we do support Amazon Kinesis and SQS, and then there is Azure...
A: ...which is coming soon, and GCP Pub/Sub is supported, so there are about six or seven adapters supported today. There are more that need to be added, and that's one of the areas where folks can contribute; I'll come to that.
A: That is one of the easiest areas to get started with in terms of open source contributions. And since you have a platform like this, you definitely want solid observability in place, so we do support and integrate with Elastic, Jaeger, Grafana and, you know, all the major ones, and we are also piloting a few additional and interesting tracing platforms beyond Jaeger as well. So that's a basic overview of Fission, cool.
A: So that's the basic system; we'll look at the architecture and the other components in more detail later, definitely. So let's start with, first of all... okay, before we go there: if you are on GitHub, I would appreciate it if you could go and star us, and of course the other repositories in Fission's organization. And you can find us on Slack and stuff.
A: The documentation is on docs.fission.io, and then there is a blog, and fission.io has a community tab where you can find how to join Slack and all that stuff. Cool, so that's sort of the basics of Fission so far.
A: So, first of all, let's talk about the environment. The environment is, if you have to think of the older days, like when you had a VM and then you started a program on it. The VM is like an environment on which you're running: you would say, I want distro XYZ, I want Apache to be running on it, some basic things, like a contract for you, basically. So think of a Fission environment as an equivalent of that, where you have the OS and runtime, and you could say: I want Alpine, I want Ubuntu, whatever you want.
A: Basically, you have some basic dependencies already available to you, and that's basically the environment. And in the context of a language, you could have one environment for Golang, you could have another for Python, or even within Python you could say: I have one Python environment for data-science kind of work, and then I have another Python environment which only runs a simple web server.
A: An important point to note here: there is no user or function code here yet. The environment is only the basic layer of Fission, where you can say: I get this basic runtime and these dependencies available, and we will add stuff on top of that later. So that is the environment as a basic concept.
A: Now, if I take examples of environments: if you look at the Node.js environment which is right now available in Fission, it has node.js-alpine as the core base image, and you have a few dependencies built in, like Express, request and a couple of others as well. So that's what forms the Node.js environment, basically.
A: Similarly (there is a typo here, it should be Go): if you look at the Go environment, it basically gives you a simple Ubuntu 18.04 and it has a Go server running on top of that, a simple vanilla Go server using mux, nothing complicated. So those are some of the environment examples; we'll go look at the code shortly, at some logical point. But now that you have seen environments, let's go ahead and create an environment, and you can follow along...
A: ...on this one with me, so that we're all at the same speed, and when we actually test things out we can try them too. So what I suggest is: open up your terminals and copy this command. You can find it on the slides; but also clone the fission/examples repo, and if you go to the samples folder and from there to the workshop folder, you will find the PPT linked there. But I'll just share the link of the PPT right now here in the chat.
A: It tells me it's a node environment, with the name node, and it uses an image called fission/node-env, and it gives you a bunch of other parameters, like what the pool size is, what the min memory and max memory are, a bunch of other values. Good, so coming back to our slide: cool, so we created the environment, we have pool size one, and we'll talk about what pool size means later, but let's first go and understand the concepts.
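[For reference, a minimal sketch of the environment command being described; the image and pool size match the demo:]

    # Create a Node.js environment backed by fission/node-env, with a pool of one warm pod
    $ fission env create --name node --image fission/node-env --poolsize 1

    # List environments to confirm it exists
    $ fission env list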
A: Basically now, once you have your environment, you ideally want to deploy some code on top of that. So that is where the concept of a package comes into the picture. A package is nothing but the user code and some dependencies, and this could have a couple of variations; we'll talk about that. So you could say: I have just a single code file; like in Python or Node.js, you write just a simple script, and it could just be a single file, no dependencies whatsoever, just pure vanilla, runtime-based.
A: So then you can use the code flag to say: I have a single code file. But more often than not you will have multiple source files and you'll have dependencies, so you might say: I need library A, library B, whatever of that sort. In that case you can specify them using source. Now, the idea here is: if you specify source, you want that source to be compiled, the dependencies to be downloaded, and everything to be packaged as a package.
A: That's where you need a build, of course, so you'll expect Fission to build it out for you, and you'll have to pass the flags accordingly. A lot of times you might also say: hey, you know what, I don't want to use Fission's builder concept; I will build the code outside of Fission and just provide my deployable archives to Fission. In that case, what you can do is still build stuff outside, and you can use the deploy flag to provide Fission the deployable build archive.
A: So there are three forms in which you can give a package, but inherently it is basically user code and dependencies, conceptually. So now we've got a base environment and we've got a package which contains our code and dependencies; that's the good thing we have covered so far.
A: Now let's create a package as well. And would it help if I share this slide with all of you folks to copy the commands, or are you comfortable typing out whatever you see here? I can type and paste the commands in the chat as well while we are doing this, so that it gets easier for you. So let me do that.
A: So, as you can notice here, one thing: I'm using the code flag. That means I have a single file with no dependencies whatsoever. Let me also open this URL, so let's see what the code is that we are running right now. As you can see, it's a simple JS function; we follow a specific syntax, where an async function taking a context is the signature, basically, and within that, whatever you write is basically the function code. And this is what we are creating the package with.
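[For reference, the hello.js being described follows the standard Fission Node.js function shape; a minimal sketch:]

    // hello.js: an async function that receives a context and returns status/body
    module.exports = async function (context) {
        return {
            status: 200,
            body: "hello, world!\n"
        };
    };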
A: So let me go ahead and do that now. Fission is going to go and actually fetch that JS file and create a hello-js package. Now, if I again do fission pkg, the short form for package, list, you will see a package being created, and, as you might have noticed, the build status here is already succeeded, because Node.js doesn't require compilation and we don't have a dependency to be downloaded. So it went straight to succeeded; had we had a code base and then some libraries, it would have gone into a running state and then eventually into a succeeded state, and we'll talk about that again shortly. So that's the package part of it, cool.
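[For reference, a minimal sketch of the package commands from this step; the package name is auto-generated unless specified, and hello.js is the file discussed above:]

    # Create a package from a single source file, tied to the node environment
    $ fission pkg create --env node --code hello.js

    # List packages and check the build status
    $ fission pkg list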
A: Now, what is a function? You've got the base environment; you've got the code and dependencies, everything packed in. You want some more configuration on top of that: you might want to say, hey, the scaling behavior should be like this, the timeout should be like this, these Secrets should be loaded, or this ConfigMap should be mounted, and stuff like that. So all these things together are what a function is, basically, and a function also has a number of execution strategies; we'll talk about execution strategies slightly later, but that's what a function is. Snehal, I just saw that you joined; we kind of started about 15 minutes ago.
G: Vishal, so good morning. I'm Snehal Pandey; I'm working as a software engineer in Siemens, and I have overall six and a half years of IT experience, out of which two and a half years is in DevOps. I have read about Fission on the company website, the InfraCloud company website, and I found it really interesting, and later on I came to know about this session, so I thought of joining, and in future I'm planning to contribute as well.
A: Cool. So if you plan to follow along in the workshop, maybe after we finish the initial one or two sections we can come back and get you set up or whatever.
G
Okay,
okay,
vishal.
I
just
have
one
question
because
I'm
I'm
late,
I
missed
initially,
so
will
we
have
the
recording
available
of
this
session.
A: Yes, the recording will be available, and you can talk to the organizers; they'll share it with you. And in the meantime, if you go to the installation page, and if you have a Kubernetes cluster running, you can just get the Fission setup done so that you can follow along with what we are doing as well.
A: Cool. So that's a function: it's basically a combination of, or it uses, the environment, it uses a package, and then it has a bunch of other things, which are like runtime configs, on top of that; that is what is called a function. Pretty good. Let's create a function using the package and the environment we created earlier, and let me again paste that command so you can follow along and use the same.
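[For reference, a minimal sketch of the function commands being pasted here; "hello-js-pkg" stands in for the generated package name shown by fission pkg list:]

    # Create a function from the existing environment and package
    $ fission fn create --name hello-js --env node --pkg hello-js-pkg

    # Invoke it through the CLI
    $ fission fn test --name hello-js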
A: The invocation would take a little while, but the creation shouldn't take so much time.
C: [inaudible]
A: That is weird; I have never seen this happen with me, at least. Something to investigate, probably. Okay, okay, Ganesh, cool. So, as you can see, the function is now there: we can do a function list and it lists the function. It is of type poolmgr; I'll talk about what that means slightly later. Now let's go and test this function: you can actually use the handy command within fission, fission function test, with hello-js, and there we go; hopefully "hello world" shows up here. Okay, great, cool. So that's a very simple, basic function we created using the Node.js environment and one simple file.
A: But, you know, I used the fission function test utility to call the function; there are various ways to call functions, though, and we talked about this slightly earlier. One is, of course, over HTTP, and it could be gRPC, WebSocket or plain standard HTTP, using HTTP triggers. So, similar to the environment, function and package objects, the HTTP trigger is an object in Fission which allows you to expose functions outside of the cluster, using HTTP as the way of calling them.
A: The second way you can expose those functions outside is using a message queue or a data change as a trigger, and those are broadly categorized into the mqtrigger, which stands for message queue trigger. As I said before, "message queue" is probably narrowing it down slightly, but basically what they do is: any time there is a change in data, or any time...
A: ...there is a message in the queue, that trigger will call a specific function; that's what the mqtrigger does. And similarly we have the time trigger, which basically calls the function based on some time format, like every hour or minute, whatever you want. Now, in the previous step, of course, we did not create a trigger, so now let me go ahead and create a trigger, a simple HTTP trigger, and I'll again paste this command here so you can use it.
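[For reference, a minimal sketch of the HTTP trigger command for this step, assuming the /hello route used in the demo:]

    # Expose the function at /hello through the Fission router
    $ fission httptrigger create --name hello --url /hello --method GET --function hello-js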
A: So we created the trigger, and now an important point: you need to get the address of the router. In my case the router has a LoadBalancer address, and I can use that; if you're using kind or something similar, I believe you can use your node port along with the host, whichever is exposed. But if I do curl here, for example...
A: ...and use this IP address, and then the URL, /hello, ideally I should get a response. Okay, I do get a response, so that is the hello world. So, instead of using the function test utility, I have basically now exposed my function to the outside world; any of you can use this URL and actually call the function. So that's what we did just now, cool.
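[For reference, a minimal sketch of calling the function from outside the cluster; the router service and namespace assume a default install, and <router-address> is a placeholder:]

    # Find the router's external address (LoadBalancer IP, or node port on kind)
    $ kubectl get svc router -n fission

    # Call the function through the router
    $ curl http://<router-address>/hello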
A: So, to quickly recap, we just covered the basic objects of Fission. The environment is basically the base runtime of a function; a package is the function code and dependencies put together; and a function basically combines your environment and package with some runtime configuration to create a function. And for triggers, you have multiple kinds: an HTTP-based trigger, a message-queue-based trigger and a time-based trigger, and with those you can call the functions in various ways. There are a couple of other objects, like the Kubernetes watch trigger and canary configs.
A: We did not cover them; they're not super important to the core of what we need to understand in the workshop, but I'm happy to explain them later, once we have gone through the workshop. So that was the first section, basically; we talked about very basic, simple concepts. Is everybody following so far? Any doubts or questions? Don't hesitate to stop me in between as well and ask questions, please. Cool, should we go ahead?
A: Great. Now let's talk about scheduling and execution strategies, and I think this is where a lot of the important things around execution will come up. There are right now broadly three execution strategies in Fission. One is called the pool...
A: ...manager. The pool manager maintains an idle pool, as in a pool of warm pods which are ready to serve requests as soon as a request comes in, and they are made into specific functions on the fly. The goal here is to optimize for resources, and there is definitely some amount of latency overhead. Now, we try to promise a 100-millisecond overhead in the request path, but that really depends a lot on the language, like if you're using Java versus Go versus Python...
A: ...there is a little bit of variation in that. The second execution strategy is called newdeploy. What it basically does is run a service almost all the time: there is no latency overhead, but there is, of course, the cost of running that service almost forever. This is almost like running a microservice, if you want to call it that, using Fission.
A: It basically creates... and we'll talk in detail about what it creates. The third is container functions, as I talked about earlier, where you give Fission a container image and it will simply run it for you. This is still under development; it will be merged sometime this week or next week into the master branch and will be released in the next release.
A: But this is a very new feature in Fission itself, and here you don't need to create environments; you can directly create a function using the image, and that's about it. The rest of the things, like triggers and the other objects, will still remain. Cool. Now let's go and understand the pool manager strategy a little more in detail; there's a lot going on over there, okay?
A: So when we create an environment, it creates a pool of pool size three by default; we configured one in our case when we created it. And what it basically does is create a deployment with replicas equivalent to the pool size. Now, this pool acts as a warm pool, and there is no package yet, so there is no function or anything of that sort. So what happens is it creates just the base runtime environment, with as many replicas as you specified in the pool size, as a deployment.
A: So that is the first thing that happens. Now, when a request comes, what happens is that one environment pod is taken out of this deployment, by relabeling. The way Kubernetes deployments work is that pods are labeled a certain way so that they are part of a deployment; the moment you remove that label...
A: ...it doesn't remain part of that deployment, so it is taken out. And since it is taken out, the replicas have now dropped to two; the Kubernetes deployment will eventually restore that from two to three again, so your pool still remains at size three. And now let's see what happens with the pod we took out. Hey Carl, I see you just joined.
B: [inaudible]
A: We kind of started around 10:15, so you might have lost a little bit of the initial part, but the recording will of course be available later. Are you set up with Fission and everything?
H: No, right now I don't have a setup. Okay.
A: [inaudible]
H: Right, so I currently work as a software engineer with GSLab, and we started with Fission some time back, so we are just here to... We work with Kubernetes, Docker and all that stuff, but right now we are looking into Fission and trying to see how we can use Fission as well.
A: Cool, so I'll continue, but yeah, I'll come back and probably catch you up on some concepts a little later.
A: Cool. So, as I was saying earlier, one environment pod is taken out, and the same is restored by your deployment. Now let's see what happens with this pod which was taken out of the deployment. What happens is: the code for the function which needs to be executed is retrieved from the storage service, and then that code is basically dropped onto that environment. So it is dynamically loaded into memory, and then the actual code is executed by the executor, basically. And this all happens on the request path: the specializing of the pod, meaning taking that pod out of the pool, getting the code, and then executing it. We'll look at the mechanism of actually loading the code in detail later.
A: But for the time being, understand that if more requests come, more pods are specialized, meaning you take out one pod, you get the code, you load it into the pod, and then you route the request to that pod. And let's say now there are four requests coming: four pods get specialized. Now let's say one of these finishes and a fifth request comes...
A: You can reuse that pod, because there is nobody using it right now. And this number, the maximum number of specialized pods at any given point in time, is the same as concurrency. Concurrency is a parameter that you specify when you create a function: you can say, I want a pool size of five but I want a concurrency of a thousand, for example, so no more than a thousand function pods should be created at any point in time. That's the concurrency concept in Fission.
A: You can also do this: you can say my one function pod which is specialized can handle more than one request. So requests per pod, RPP, is the parameter that is configured; right now it's configured to three, so each function pod will be sent three requests at the same time. Technically, if you have four specialized pods, you could be serving 12 requests at the same time, and these things are tuned based on the use case.
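[For reference, a minimal sketch of setting these knobs at function-creation time; the flag names follow the fission CLI, and the values are just the ones from the example:]

    # Pool-manager function allowing up to 1000 specialized pods,
    # each serving up to 3 concurrent requests
    $ fission fn create --name hello-js --env node --pkg hello-js-pkg \
        --concurrency 1000 --requestsperpod 3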
A: You can fine-tune them and make them work for your use case. Like, some users will say: I want to execute only one request at a time in a pod, because there is some user-sensitive data and I don't want that to be shared with anybody else, whatever; that's all right.
A: So all those things drive how you set concurrency and RPP and stuff. Now, once all your requests are done and there are no more requests coming in, Fission will wait for a duration, the idle timeout, which is configurable per function, and then these specialized pods will be killed; they'll be cleaned up. Your pool still remains; the pool is not affected. And if, say, more requests come...
A: So if you go by this logic, when there are no requests coming in you are maintaining only a three-pod deployment, but when there are requests coming in you're scaling that out to 16 pods without doing anything, all pre-configured, so to speak. And that scale-out is what makes it so efficient in using resources.
A: Now, in the real world, of course, people do have multiple languages or multiple environments, so something like this would be a more realistic scenario: you have one environment of Python with replica 3, and another environment of Node.js with replica 5, or pool size 5, because maybe we are expecting Node.js to get more requests, so we keep a slightly bigger pool size there than on the Python-based environment.
A: There are three functions, A, B and C, and as the requests come in they're all specialized, executed and cleaned up and all that stuff; and for the Node.js environment we have three more functions, P, Q and R; again, based on the requests coming in, they are specialized, executed and then cleaned up as you go, on the fly. But even with this, for example, if there are no requests coming in, we are only maintaining...
A: ...these eight pods, and when there are requests coming in we are maintaining probably about 20 or 25-odd pods, scaled out dynamically and then scaled back to the earlier number, basically. Cool, so that's the basic pool manager working. Now, when do you use the pool manager? It is really good for dynamic workloads, when you expect something to suddenly burst up and then burst back down on the fly, and you don't know that pattern clearly.
A: It is also a good fit for event-driven workloads, the reason being that we are adding a little bit of latency here in the request path, so we want to make sure that the latency can be accommodated by the workload. And then, one common environment can be used by many functions; that is another use case, where you can say: hey, I have this Python environment and I want all my developers to use the same environment, and then build on top of that.
A: So that's where also you could use this: a single environment and multiple functions. Cool. Ashray, I see you joined recently.
F: [inaudible]
A: Right, so think of it like this: the warm pool is almost like you have already started a pod, because, see, if a request comes, starting a pod will take a lot of time. So you start that pod already, and you also have a server running in that pod; so when the request comes, all you do is fetch the code and execute it. It's called a warm pod so that the time to execute when the request comes is minimal. [inaudible]
A: Cool; so again, feel free to stop me if you have questions, and I'll continue. All right, so how about watching this in action? Let's do that. So, first of all, I'm going to get the pods in the fission-function namespace. As you can see, right now there is only one pod, and it's a pool pod; I'll tell you why it is a pool pod.
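[For reference, a minimal sketch of the commands used to watch the pool, assuming the default fission-function namespace:]

    # Watch the environment and function pods
    $ kubectl get pods -n fission-function

    # Show the labels that distinguish pool pods (managed=true)
    # from specialized function pods (which carry a functionName label)
    $ kubectl get pods -n fission-function --show-labels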
A: So there are two pods: one pod was taken out of the pool, and as soon as we took it out, the original pod was replaced by the deployment with a new pod, and the second pod, the one that was taken out, was used for execution. Now, if I look at the labels on these, you will see this pod has a functionName label of hello-js; this means this pod is right now a function pod, actually. And this pod doesn't have a function name, and has managed=true...
A: That means it's part of the pool. And I could again show you this by creating, say, three warm pool pods and then firing a request, but fundamentally it's the same mechanism: initially there was one pod, and now there are two pods, and if I fire a request again it shouldn't add another pod; it'll simply use one of those pods to execute the request. Okay, it did create one for some reason.
A: I believe that is either a bug or some configuration problem on my side, but now we have two hello-js pods, so it did create new pods. And if you wait for another two to three minutes, these two pods which were created for the hello-js function will be cleaned up by Fission, and I will then have just one pod in the pool over there, basically. Cool, so that's the thing. Let me do curl again and see if it still creates another...
A: ...or it's a bug, or my RPP settings might be wrong or something like that. Cool. So for every request it actually created a pod; it should have reused one, but maybe my configuration is a bit off there. But you get the point: for every request we specialize a pod, and reuse that pod if required, based on the configuration and stuff.
B: [inaudible]
A: Meaning you are saying: at a time, only execute one request in a pod. Okay, because you can see that one is terminating, the oldest one, basically; so now we should have only three pods running and one terminating, and another two will terminate in a short while, basically.
E: Yeah, so I can relate to most of that. I worked on OpenWhisk, so I can relate more to the stuff here.
E: [inaudible]
A: Correct, excellent question. So, right now, what we did with the pool manager is basically a cold start, meaning we had a pod ready, and when the request came we did some stuff and then executed the request; that's basically a cold start. The other executor that we have, which is newdeploy, is basically a warm start: what it does is do all that operation beforehand, before the request comes, and then simply route the request when it comes.
A: Cool, so now coming to newdeploy. What newdeploy basically does is: the moment you create the function and environment, it creates a function pod, or deployment, then creates an HPA and also creates a Kubernetes service, all of this immediately when you create the function. So now, when the request comes, there is no loading of code or anything of that sort happening on the fly.
A: Great. And the newdeploy deployment also has an HPA, so if more requests come in it will keep scaling those pods, to one, two, three, four, whatever you set it out to be, and you can always say: this is my minimum scale...
A: ...this is my maximum scale, and that controls the number of replicas and stuff. And here, of course, we're talking about zero latency, because there is no cold start happening. Now, of course, the trade-off is that you are saying: I always have to have whatever number of pods I need running, basically. Now, when to use the newdeploy executor?
A: If you can't tolerate any latency, let's say you're serving a website, a static site or whatever, and you don't want any latency in the user request path, that's when you should use newdeploy. Workloads that are fairly static and don't change are another use case, because in the case of the pool manager you scale out very fast and then scale back in as required; this can also be a great way to use it.
A: Cool, so I'm creating the same function; I had to change the name, of course, so instead of hello-js I'll call it, let's say, warm. And I'm using the same environment, I'm using the same package, but I'm just changing the executor type. Now let's watch what happens in that namespace as soon as we do this.
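[For reference, a minimal sketch of the newdeploy variant being created here; min/max scale values are illustrative:]

    # Same environment and package, but with the newdeploy executor
    $ fission fn create --name warm --env node --pkg hello-js-pkg \
        --executortype newdeploy --minscale 1 --maxscale 3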
A: Okay, I think I'll file a bug for this. Okay, and which example file are you mentioning, Ganesh?
D: The one along with yours: hello-js-1 is giving me 4.4, but the one which... [inaudible]
A: There's probably something wrong with my function code. Cool, so that was newdeploy briefly working for you; well, we watched the demo, of course, and it didn't work, so that's cool, but I'll fix that and let you know later on, like on the Fission Slack or something. Cool. Now, the third one, which is heavily under active development, is the container as a function. Basically, you can give a container image to Fission and it runs it for you. This is still under heavy, active development.
A: So right now it is going to go through heavy change, and more features are going to get introduced; watch out for the documentation for this one as we update it. Cool. So now I want to go a little bit deeper into how the environment really works, basically. We talked about the code loading and stuff like that. So when you create the pods...
A: So if you look at this pod, there are actually two containers running here: one is a fetcher container, and then there is the actual node container. You see that? One is the node container; the other is the fetcher container. What is happening is: the moment you create an environment, the environment container starts, which is the language runtime environment, like Node or Go or Python, whatever.
A: There is a simple web server running in there, in the environment container. On the side there is a fetcher container, and this fetcher is a very lightweight Golang container, super thin; it doesn't do a lot of computation and stuff. But this fetcher allows the pod to talk to the storage service of Fission, to get the code for a function when there is a request for that function.
A: So what happens is, when the request comes, Fission first goes and talks to that pod and tells the fetcher: hey, there is a request for function ABC; why don't you go and get the code for function ABC from the storage service and load it into the environment container? And the environment container has an endpoint called specialize, which basically loads this code into memory.
A: So the fetcher goes and grabs the code from the storage service, places it on the file system at a certain path, and then the environment container simply loads it; the fetcher basically calls the environment's endpoint and says: hey, I've placed the file here; why don't you load it into memory? So the fetcher has a very simple API, three or four endpoints. One is called fetch, which basically goes and fetches the code of the function from a specific storage service for a given function; and specialize is what is called by Fission, but the fetcher, in turn, simply calls the environment's specialize endpoint and tells it: please load this code. And then there are other endpoints which we won't talk about in too much detail; they are used by other systems. But this is the internal working of how the loading of the code works. Again, as a user you don't need to know this in absolute detail, but I just thought I'd briefly talk about it. Cool, any questions on this so far?
A: Cool. Now, we talked about a bunch of things; just to recap, for some of the folks who joined a little later: we talked about the environment. The environment is your base runtime; it includes the base OS and some dependencies that you absolutely need in the environment.
A: The package is the function code and dependencies put together, so you can say: I have this code, and then I need X library, Y library, whatever other things. A function basically combines your environment and package, and then it has some runtime configuration, some runtime strategies, which is what makes the function a function.
A: Then you have triggers. We have three types of trigger: the HTTP trigger, the MQ trigger and the time trigger. The HTTP trigger allows you to call functions using HTTP protocols, like gRPC, WebSocket, even plain vanilla HTTP. The MQ trigger allows you to call functions when there is either a message in the message queue or a change in a data source. And then there is the third, the time trigger, which calls the function at a particular time interval, whatever of that sort. Cool, so those are the objects that we talked about.
A: Let's very briefly look at the entire architecture of Fission now. Great, so you have the environment again: the dependencies plus the OS and runtime. You have the package, which has the user code plus additional dependencies. These two put together are what make a function. Within functions we have two kinds of executors. The pool manager maintains a warm pool of pods, and it has some cold start, because what it does is: when the request comes, it goes and grabs the code from the storage service, loads it into memory and then executes the request.
A: So the time it takes to do that operation is the delay that we introduce, but there are the trade-offs that we talked about. Newdeploy starts the environment and the fetcher pod and fetches the code right at the beginning, so it doesn't do that in the request path; when the request comes, it simply reverse-proxies and responds to the request.
A: All these operations are handled by a service called the executor service, so the executor service is the core: it does all this crazy stuff of managing pods, cleaning up pods, loading and all that. The executor service also talks to something called the builder manager service: if you give Fission source code and ask Fission to build it for you, or fetch dependencies for you, then the builder manager service does that, and we're not going to cover that in a whole lot of detail in this workshop.
A: For the time trigger we have the timer service; for the HTTP trigger we have the router service; and for the MQ trigger we have the MQ trigger service, which supports a whole lot of message triggers that we'll talk about later. And then there is the controller service, which is like the entry point, or the API endpoint, of this whole Fission thing. So that's a brief overview of the Fission architecture and all the components that we talked about, cool.
A: So what I'm going to do, folks, is take a one-minute break; I'm gonna go grab some water, and once we come back we'll probably catch some of the folks up on their setup and stuff, and if they are needing any help we can talk about that, and then we'll resume. So right now it's 11:08; we'll probably resume at 11:12. Is that okay for everybody, a three-to-four-minute break? Yep.
A: I can pause the recording and then we can resume it. Cool, great. So we covered the Fission internals: we talked about all the concepts, the functions, environments, packages and triggers, and then we talked about the various services which do all this stuff. Again, this is not meant to be a super-exhaustive, detailed overview of what happens inside the executor service and all that stuff.
A: In an enterprise or real-world application, a lot of applications are actually event-driven: something happens, you need to send an email to somebody; something happens, you need to do something. If I have to give a very simple example: the moment you place an order on amazon.com, or any e-commerce website for that sake, a couple of things happen. You first get an email confirmation that, hey, your order has been placed.
A: Another event is being sent to the warehouse which is close to your home and where the supply is available: hey, this is a new customer's order and we need to ship it. There's probably another confirmation coming in from the back of the payment system that, hey, the payment is done, or failed, or whatever, and if it fails you get another email that, hey, your payment failed. So everything, if you think of it, can be modeled as an event-driven system.
A: So event A happens, and two things might be called; from each of those, two subsequent things might be called. That's a very typical way to build a lot of real-world applications. And Fission supports these through something called MQ triggers, and we'll talk about that in a moment. So if I talk about a simple example here: there is a producer function, let's say, and it produces a message and drops it into a Kafka queue.
A
This
kafka
queue
is
being
listened
to
by
another
function
and
as
soon
as
there
is
a
message
here,
this
function
will
be
invoked
now.
This
function
does,
let's
say
a
bunch
of
processing,
and
if
it
succeeds,
it
wants
to
put
the
message
into
a
response
queue
so
that
you
know
it
can
say
this
is
done,
all
looks
good,
but
if
there
is
an
error,
it
wants
to
put
that
into
another
message:
queue
topic
after
topic,
called
error.
A
A
You
know
like
before:
release
1.10.0
we
had
traditional
like
mq
triggers
which
were
built
into
fiction
and
and
they
were
limited
to
kafka,
nats
and
azure
storage
queue
and
it
was
called
mqtt
kind
fission
or
they
were
like
no
type,
but
that
was
what
it's
called,
but
that
is
not
actually
maintained
anymore.
So,
if
you're,
you
know
using
anything
new,
I
would
suggest
use
the
one
on
the
right
hand,
side,
which
is
empty
kind,
kada.
Basically,
right
now
these
are
new
triggers
which
are
built
on
top
of
cada
project
and
cada
project
is
here.
A: ...if you want to go and check it out. We use KEDA as the underlying mechanism, but we have built a layer on top of that, and currently we have support for NATS, Kafka, RabbitMQ, AWS SQS, AWS Kinesis and GCP Pub/Sub, and then there are many more coming; these are the ones that are being actively developed.
B: Why is this there? Okay.
A: Cool. So let's assume you have a simple Go function, which is the producer function, and it drops a message into a request topic, a Kafka request topic. Now, when there is a message in the request topic, you want something to be triggered, but you don't want this trigger to be always running. If you look at the typical world today, we run a service, it keeps listening to a message queue, and if there is a message it does something; but you don't want that.
A: So what KEDA does is: as long as you have it installed and configured to listen to a specific topic or message queue or data source, it will scale the pod to zero. Only when there is a message, or an incoming signal, basically, will it scale something from zero to one, and that something, in our case, is the KEDA connector, which is a separate project in its own right and can be used outside of Fission as well; we'll talk about that in the contribution section.
A: But these are very generic connectors which do certain things. In this case, what they do is read the message from your source system, the database or message queue or whatever, and then call an HTTP endpoint, and the HTTP endpoint in our case happens to be another Fission function, called consumer.
A: It can simply return a 200 response with the message body, and that will go to the Kafka topic called response topic, without the function having to know anything about Kafka. And if there is an error, you return a non-200 HTTP response, and the body, that message and everything will go to the error topic.
A: Now, if you look at the MQ trigger's definition, as we looked at the definitions of the function or package or environment: you give it a name; you tell it the function to be called if there is something in that message queue or source database; then you say mqtype kafka, and mqkind keda, as I said, if you want to use KEDA; and then you provide some metadata parameters, like what is the topic to watch, what is the response topic, what is the error topic.
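[For reference, a minimal sketch of an MQ trigger of kind keda as described; the topic names match the example, and the broker address in --metadata is a placeholder:]

    # Call the consumer function whenever a message lands on request-topic
    $ fission mqtrigger create --name order-consumer --function consumer \
        --mqtype kafka --mqtkind keda \
        --topic request-topic --resptopic response-topic --errortopic error-topic \
        --metadata bootstrapServers=<broker-address>:9092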
A: So that's the basic definition; we'll look at this again in detail when we actually go and try it hands-on, but I just wanted to bring it up so that we can relate it to what we talked about in the earlier slides, cool. So we'll do the demo slightly later, and we'll cover one section before that. Right now, in terms of time, I believe we are running ahead of time; let's see.
A: Cool. Now, so far we have been using the command line to create the Fission objects: the functions, the environments, the triggers, whatever. This is not ideal if you want to store them in some Git kind of source control system, and that's where Fission specs come to the rescue. They are basically YAML definitions of the Fission objects, and with them you can do all sorts of CI/CD stuff.
A: You can also create the function and environment definitions on your own machine, then check them into source control, and then apply the same definitions to dev, QA and other environments, basically. So let's go and try it out: let's go and create a simple package and a simple function, and then try applying all that stuff. So I'll be using this...
A: ...repo, fission/examples. We'll be using a bunch of its examples, samples and other directories during the hands-on section, which is coming up next, so do clone this repo if you haven't, so that you can follow along with the hands-on part of it. I'll just ping that in the chat; there you go.
A: Next, I'm actually going to go and create the Fission environment; this is the first one I'm going to follow. Now, if you notice, one thing which is very different about this one, compared to the other commands that we ran before, is the spec flag. That means that you're not going to actually create it through the Fission server; you're actually only going to create a declaration locally.
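[For reference, a minimal sketch of the spec-mode commands used here:]

    # Initialize a specs/ directory for this project
    $ fission spec init

    # Write the environment as a local YAML spec instead of creating it on the server
    $ fission env create --name node --image fission/node-env --spec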
A: Okay, so my specs are created. I'm going to do fission spec validate, and it tells me there is one function, one environment, one package, no triggers, nothing else. And I'm going to apply; I still haven't created anything on the server, so if you go to the actual pods there's nothing running right now, apart from the ones in terminating state. So that's all there.
A: And it is going to create one function, one environment and one package; validation again done, and then it is going to create the stuff. Cool; so, as you can see, it has done the stuff and it is running now, and now I can do the same thing, which is fission function test, which we did earlier; nothing fancy there.
A: So if I go to samples/workshop and go to the specs directory: first of all, the environment definition. It's the same Kubernetes kind of format, YAML format: you have a kind, you have an apiVersion, metadata and then the spec, and in the spec we specify the pool size, runtime version, a bunch of those things. Similarly the package:
A: it specifies the name, the URL from which to get the code, and everything of that sort; and the function, which references the environment and the package, has a bunch of other parameters, like concurrency and whatnot. And the one file here which is not a CRD, or not a CR from a Kubernetes point of view, is this deployment config file. What it does, all it does...
B: [inaudible]
A: Right, so the apply is done with watch mode; when the package is updated, it's watching for file changes. Now, what you can actually do is... okay, one interesting thing: I think we referred to the code by URL, so let's change that, actually; let's change it to a local file, because now what I can do is actually change the code on my local system, and the moment I change the code it is going to notice that something has changed.
A: In the Fission workshop file, I'm going to save it, and the moment I save it, this thing has watched that there is a file change; it has noticed it, and the apply makes the change. Now, if I call it again, hopefully I get the new response in a minute; all right, there you go. So this is like the development workflow, where you can keep changing code, keep watching it, see it reflected in real time and actually test out the function with the new code.
A: Basically, that's another beauty of specs: apart from checking the source code and the Fission specs into GitHub, you can also do a developer-workflow kind of thing with Fission specs. Cool, any questions so far, any doubts? Have you guys been following along, or do you want to do the hands-on part later? How do you want to do it?
C: So how does this work in a CI environment, say like Jenkins? So I would write my code; I have a hello.js. What do I do after that?
A: [inaudible]
A: Okay, okay. Now, there is one feature request from a user that we have: hey, I don't want to use the fission CLI to do spec apply; can you enable a Helm kind of scenario? The package and your environment technically are custom resources of Kubernetes, so you can actually use them with Helm as well, but that is right now not possible, as of today, because of this one file.
A: So we are trying to change that, and probably in a month or two we should have a feature where you can package this whole thing as a Helm chart and simply apply the Helm chart.
C: Yeah, that would be beneficial. I also use Pulumi to bring up my infrastructure, so I...
A: Yeah, okay, so a Helm chart is coming, and there is another user who is working on a Terraform provider, so Terraform resources will also come, but probably one or two releases down the line. Okay, but that's a great question, Nikhil. And another question, which is kind of related to what you asked:
A: now, you might say, hey, this is great for my local thing, but my dev configuration versus QA versus prod configuration is going to be different, right?
A: How do I maintain that, for example? So right now our specs, in a way, I would say, don't have native support for that. You can use something like Kustomize to overwrite some fields, but again, that is another feature that will probably be introduced in a couple of releases, where you can say: here's the environment node definition, and then you can have configuration specific to QA or stage in a different file, or something of that sort, basically.
A
Cool, so that was specs and the developer workflow around them. Another thing I want to introduce now: when you have specs, you can also do a lot of things like tolerations, taints, sidecars, and init containers, and that's where the pod spec feature comes into the picture. Pod spec is something you can introduce in the specs.
A
So if you go to the environment node here in the spec, you can add a new field under runtime called podspec, and any field which is part of the Kubernetes pod spec declaration could be introduced here. So you could add init containers, you could add image pull secrets, you could add volumes and a bunch of other stuff; all of that is possible at the environment level today. Again, a lot of our users have been asking: can we introduce this at the function level?
A
Because really, what we want is that function-level difference of which volume to attach and all that stuff. One underlying challenge there is that if you change anything disruptive at the pod level in Kubernetes, you have to restart the pod, and that doesn't gel well right now. But that is something, another feature, that will be introduced at some point. So that's pod spec; I just mentioned it here.
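(For reference, the shape of a podspec under an environment's runtime looks roughly like this; a hedged sketch shown inline, with illustrative values.)

```bash
cat <<'EOF'
apiVersion: fission.io/v1
kind: Environment
metadata:
  name: nodejs
spec:
  runtime:
    image: fission/node-env
    podspec:
      initContainers: []           # init containers would go here
      imagePullSecrets:
        - name: my-registry-secret # hypothetical secret name
      volumes:
        - name: scratch
          emptyDir: {}
EOF
```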
A
I just did a brief overview, but we'll cover a little more in the hands-on demo when we actually come to that section. Cool, all right, so we are at the hands-on section. I suggest you folks do follow along; that will get the most, or maximum, benefit out of the workshop. But maybe you definitely want to do it later.
A
That's cool as well, and you can always drop on to the Fission Slack and ask questions if you get stuck on anything. I'm going to follow along all the four demos with you, at least apply them, and show you them running together.
A
Great, so the first demo is building a custom environment. Now, we talked about environments earlier, and let's say the Node.js environment that Fission provides is not good enough for you. What you want is headless Chrome installed in the Node.js environment, because you want to do some scraping of a web page or whatever. So you want to build a custom environment for use within your enterprise or within your company, correct?
A
The example is in the examples repository, in samples, the Node.js Chrome hello-headless one. So let me go there. Oops, okay, here, right here, the environment is defined like this. So if you actually go to Fission's environments repo, let's go there and quickly look at it.
A
We have an environment called nodejs, and all you need to create this environment is two simple files. One is a Dockerfile, and there is a server.js file. The server.js file you most likely won't change a whole lot; it's a pretty simple file, not more than a few hundred lines. And then there's the Dockerfile. But for our requirements, if you look at this Dockerfile, there's no headless Chrome installed, and I want headless Chrome installed.
A
So what I'm going to do is take this server.js from the environment, and in my Dockerfile I'm adding a few things: I'm adding Chromium, I'm adding freetype, NSS, and a bunch of other dependencies that I need. I'm also setting some variables like PUPPETEER_EXECUTABLE_PATH and so on.
A
I copy server.js, that's cool, and so now this is a custom environment, and I'll push it to my own Docker repo and use that pushed image, basically. So if you go and look at the image, I'm using vishalbiyani/node-chrome-1. And now let's look at the second part of the demo, which is using this custom environment to actually run headless Chrome within a pod and give us some results.
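(A hedged sketch of what that custom image build looks like; the base image, package names, and registry placeholder are assumptions, not the exact repo contents.)

```bash
cat > Dockerfile <<'EOF'
FROM fission/node-env
# headless-chrome and its dependencies (Alpine package names assumed)
RUN apk add --no-cache chromium nss freetype harfbuzz ca-certificates ttf-freefont
# point puppeteer at the system chromium instead of downloading its own
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true \
    PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
EOF

docker build -t <your-docker-repo>/node-chrome-1 .
docker push <your-docker-repo>/node-chrome-1
```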
A
So if you look at that part, in the hello.js I have some simple code. We of course follow the declaration of the method, which is an async function, and module.exports; within this is where we write the actual function code. First of all, we are using the url.parse method to get the URL from the context, so context.request.url; I'll pass the URL as part of my request.
A
When I call this function, I'm getting the query part of it from url.parse, and then I'm creating a browser: I'm calling puppeteer.launch using the Chromium binary path, then actually doing a browser.newPage with google.com as the page to be browsed, and then simply responding with the content of that page to the caller of the function. And since I'm using this puppeteer library, I also have a package.json which says to load the puppeteer library.
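(Pieced together from the description above, the function looks roughly like this; a sketch following the Fission Node.js conventions, not the verbatim repo code. The env and builder image names are assumptions.)

```bash
cat > hello.js <<'EOF'
const url = require('url');
const puppeteer = require('puppeteer');

// fission node env convention: module.exports is an async function
// that receives the context and returns a response object
module.exports = async function (context) {
    const query = url.parse(context.request.url, true).query;
    const browser = await puppeteer.launch({
        executablePath: process.env.PUPPETEER_EXECUTABLE_PATH,
        args: ['--no-sandbox'],
    });
    const page = await browser.newPage();
    await page.goto(query.url || 'https://google.com');
    const content = await page.content();
    await browser.close();
    return { status: 200, body: content };
};
EOF

# environment from the custom image, plus a source package that needs building
fission env create --name node-chrome \
  --image <your-docker-repo>/node-chrome-1 \
  --builder fission/node-builder          # builder image name assumed
zip hello.zip hello.js package.json
fission fn create --name hello-chrome --env node-chrome --src hello.zip
```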
A
I do the spec apply. Now, this package is going to need building. So if you go and watch fission package list, you're actually going to see it in running status, because it's not a single file; it needs to do some work. It creates a pod, downloads dependencies, packages everything up, and eventually it will return to succeeded status once it is done. But it needs building within the cluster.
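(The build status can be watched like this; the package name placeholder is illustrative.)

```bash
fission package list                  # status goes running -> succeeded
fission pkg info --name <pkg-name>    # shows build logs if the build fails
```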
A
All right, as you can see, the response is the whole HTML page of google.com, which is what the expected output was. Wow, that's a pretty big HTML for a simple Google search page; let's go to the top, so much JavaScript and stuff. So what I did is: I built a custom environment with headless Chrome on top of the Node.js environment that Fission had, then I hard-wired the URL to call google.com within the function, and I called that function.
A
Basically, it went and ran headless Chrome, used the puppeteer library within Node.js to go and query google.com, and gave me the response of that. And there is actually a real use case for that: a lot of people do web scraping of various websites, and they want to use functions for these kinds of workloads. So it's an HTML page.
A
There you go, so that was a scraped page using puppeteer and headless Chrome. Oh, and it used some other language, because I'm running my cluster using Civo cloud's k3s Kubernetes servers.
A
Great, so that was building a simple custom environment, then actually building the source code within the Fission cluster, and then using that package in a function. Any questions, any doubts, please feel free to ask.
A
Yeah, so to wait for it to be succeeded: that is one feature request again, that the function should wait till the build has succeeded, and to give you something finite it should say something like "the package is not ready". But that's another feature coming soon.
A
Can you do me a favor: go and look, there's a pod in the fission-builder namespace.
A
Okay, can you do a kubectl get pod in the fission-builder namespace?
F
Yeah, it's working for me. The reason why it failed for me at first was that the package was still in running, and as soon as it succeeded, I got a response.
A
Super interesting. And can you try to get the logs of it? There is a builder manager service in the fission namespace. So, Nikhil.
A
Yeah, so you see the storage service is pending, the last but one. What happens is the builder manager takes your source code, builds the package, and then uploads it to the storage service, and the storage service will be used by the function to retrieve it. Since you don't have a volume on your setup, it is not provisioned; that's why it's zero of one, in pending state.
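(To see the pieces of that pipeline yourself; namespace and deployment names as in a default Helm install, so verify on your setup.)

```bash
kubectl get pods -n fission-builder        # per-environment builder pods
kubectl get pods -n fission                # buildermgr, storagesvc, executor, router...
kubectl logs deploy/buildermgr -n fission  # builder manager logs
kubectl get pvc -n fission                 # a pending PVC here is the symptom discussed
```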
A
So just give me one second, I'll pull up the flag and then you can fix it. You can do a helm upgrade of fission. You are installing via Helm, right?
B
Yes. Okay, so just one second, paste it here in the chat window.
A
You don't need to give... you need to give --name, I guess, for fission.
A
It's just a helm upgrade, right? Or, I haven't... have an...
C
I don't want to block anyone; I'll try to update it. You can carry on.
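(For reference, a hedged sketch of the kind of fix being discussed here: with no usable volume, the storage service's persistence can be disabled. The persistence key follows the fission-all chart, but verify against your chart version.)

```bash
helm upgrade fission fission-charts/fission-all \
  --namespace fission \
  --set persistence.enabled=false   # use ephemeral storage instead of a PVC
```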
A
Cool, so the third demo is to use specs to schedule functions. What we are doing here is tainting one node and then adding the toleration in the pod spec of the function, or of the environment in this case, so that the function is able to get scheduled on that node. Now, this demo works great if you have a kind cluster, because there is only one node, and if that node is tainted, no other pod can get scheduled unless it has a toleration.
A
But in my case we have three nodes, so it is very possible that the function gets scheduled on any other node, so to speak. So what we will do is change the demo slightly, and let's see how we can tweak it. This is in the samples pod-spec example directory; follow along again. So I go to the pod spec example spec, and this one doesn't need a builder service, so you can actually use it without any problems.
A
If you go to the environment definition, you will see podspec here within the runtime, and within the pod spec, tolerations. It's saying the key reservation equals fission is the only taint the nodes can have that it will tolerate. And if I go to the readme of the directory, we are tainting a node, so we can taint a node. Let's actually do this: kubectl get nodes, okay, and I'm going to taint all three nodes. So if you have three nodes, please do taint all three nodes with reservation=fission:NoSchedule.
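(The taint step looks like this; node names come from kubectl get nodes, mine are illustrative.)

```bash
kubectl get nodes
kubectl taint nodes node-1 node-2 node-3 reservation=fission:NoSchedule

# matching toleration carried in the environment's podspec:
#   tolerations:
#   - key: reservation
#     operator: Equal
#     value: fission
#     effect: NoSchedule
```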
A
I believe it's some of the validations that we added recently that caused this bug; that's my hunch.
A
Okay, that worked great; now I can apply again. Hopefully that works, let's see. Okay, now, as you can see, the pod which was pending earlier was destroyed; of course, a new one was created, and it got scheduled without any issues. And I can actually test it out as well: fission fn test. Nothing fancy in the output; of course we'll get the same old output.
A
What is the function name?
A
So I just showed you the use of taints and tolerations, to show that you can use tolerations within functions as well, for tainted nodes, and schedule accordingly. Now, what is the use case for this? I'll give an example. If I have a cluster where I have, let's say, some machine learning workload and it needs a GPU, then some functions should only get scheduled on the nodes where there is a GPU, and otherwise it won't work, basically.
A
So what I'll do is taint all the nodes with a GPU with some taint, and only the functions which are running machine learning workloads are the ones to which I'll add the toleration, so that they go on those nodes. Other functions can go on to other nodes. So that's a typical standard use case. Now I'll remove the node taint, otherwise we'll run into errors in the next example.
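(Removing the taint again; the trailing "-" removes it.)

```bash
kubectl taint nodes node-1 node-2 node-3 reservation=fission:NoSchedule-
```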
F
How did you solve that? I missed that; how did you solve that error for Node.js?
A
Oh okay, so what has happened is that we added OpenAPI schema validation in the last release, and containers, being a list, needs to be declared, at least as empty.
A
This line is what I added, the line 21 here. I'll make a PR for this as well, and we probably have to raise either a documentation PR or a bug for this, but yeah, cool.
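(The shape of that fix, shown inline; illustrative YAML, line numbers refer to the demo's own spec file.)

```bash
cat <<'EOF'
  runtime:
    podspec:
      containers: []        # must be declared, even if empty, to pass
                            # the new OpenAPI schema validation
      tolerations:
        - key: reservation
          operator: Equal
          value: fission
          effect: NoSchedule
EOF
```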
A
Cool, so that was taints and tolerations. Now, the next example, which is the RabbitMQ and KEDA integration. For this example to work, you ought to install KEDA. So in my case, I do have KEDA installed.
A
It is in the keda namespace, and it is running its stuff. You also need to make sure, if you are on a release before 1.12.0, that your Helm values have the KEDA message queue trigger enabled, set to true. Let me just show you what I mean.
A
So, like this: prometheus enabled is false here, and you should have mqt_keda enabled set to true if you're using 1.12.0 or a previous release; something like the value shown below.
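(Roughly like this; the key name follows the fission-all chart of that era, so verify against your chart version.)

```bash
helm upgrade fission fission-charts/fission-all \
  --namespace fission \
  --set mqt_keda.enabled=true   # default false in 1.12.0 and earlier
```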
A
If you're using 1.13.1, by default that is the case; but if you're using anything 1.12.0 or earlier, this has to be set to true, since it is false by default in those releases. Makes sense. Now, let's go and pick up that example, KEDA with RabbitMQ. Before I go there: if you're using kind or any local cluster,
A
just do the RabbitMQ parts, and I can tell you what the changes are, because it'll get pretty heavy otherwise. If you're using a cloud-based cluster, you can do both the Kafka and RabbitMQ parts, but of course that means you'll have to install Kafka, you'll have to install RabbitMQ, and then come to this piece. So I'm going to only do the RabbitMQ part, so that you also see what changes I make, and it will also probably take slightly less effort for us to get everything installed.
A
So if I go to the readme again, all the documentation is there as to how to install KEDA, Kafka, and RabbitMQ and such. I'm going to first install RabbitMQ. For installing RabbitMQ, I'm going to use the krew plugin manager of kubectl to install the rabbitmq plugin; it probably is already installed in my case. Then I'm going to create a namespace called rabbitmq, create the operator for RabbitMQ in that namespace, and finally create a RabbitMQ instance.
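(A hedged sketch of those steps with the kubectl rabbitmq krew plugin; command names follow the RabbitMQ cluster operator's plugin, so verify locally. The operator may install into its own namespace by default.)

```bash
kubectl krew install rabbitmq
kubectl rabbitmq install-cluster-operator
kubectl create namespace rabbitmq
kubectl rabbitmq -n rabbitmq create rabbitmq   # a RabbitmqCluster named "rabbitmq"
```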
A
It's still creating; probably I have to wait. Okay, that's...
A
RabbitMQ is within the samples directory, and the readme has all the instructions; you can do the entire demo of Kafka and KEDA later. I did a video on that two weeks back, on the 16th or 17th of Jan, as part of a CNCF webinar; you can go and check out the CNCF webinar, which does exactly this thing.
A
That's the video; do watch it later, please. Let's see whether RabbitMQ is up or not. Okay, it is running, great; we have one problem solved so far. Let's go to the sample again now. If you look at this entire thing... actually, let me open that video up; it might be good to show the diagram which I was talking about.
A
Okay, this is what is being demoed. One function produces messages and puts them into a Kafka topic. There is another function listening to it: it listens on the request topic and gets a message. If it's not able to process it, it puts it into the error topic; if it is able to process it fine, it puts it into the response topic. On the response topic there is another function listening in; as soon as there is a message, the KEDA piece kicks in and calls this function too.
A
That drops the message into RabbitMQ, and again another function is listening on it, and it gets invoked. We are only going to do this last part of it. We are going to simulate the function by creating a body, or sending a value, to the producer function for RabbitMQ; we'll watch it receiving a message, and we'll watch another function getting invoked as soon as there is a message in RabbitMQ. We'll also look at the autoscaling part here, the autoscaling part of KEDA.
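(For reference, the trigger wiring a queue to the consumer function looks roughly like this; flag names follow the Fission CLI's KEDA-backed message queue triggers, and values are illustrative. A RabbitMQ trigger also needs connection details passed via metadata/secret, which are omitted here.)

```bash
fission mqtrigger create --name rabbitmq-trigger \
  --function consumer \
  --mqtype rabbitmq --mqtkind keda \
  --topic publisher \
  --resptopic response \
  --errortopic error
```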
A
Thanks, cool, good point; I didn't realize it. This is how documentation works: you miss a few things, and you only realize when somebody else tries it. Good. So, the RabbitMQ producer: it's a simple Golang function. Again, it follows a certain syntax, because it has to follow the Fission kind of method declaration: a function Handler that gets the request, and it has to return a response.
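(The shape of that contract, as a sketch: the Fission Go environment looks for an exported Handler with the standard http signature; the body here is simplified, not the sample's actual code.)

```bash
cat <<'EOF'
package main

import "net/http"

// Handler is the entry point the fission go environment invokes:
// it receives the incoming request and must write a response.
func Handler(w http.ResponseWriter, r *http.Request) {
    // ...dial RabbitMQ over AMQP, declare the queue, publish the body...
    w.Write([]byte("published"))
}
EOF
```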
A
Basically, it gets a bunch of environment variables from the function spec; we'll have to change those as well, because I think the username and password will change for sure, if not the host and port. It dials the AMQP port of that RabbitMQ instance, and then it creates a channel and stuff, which is very RabbitMQ-specific code. It declares a queue called publisher, and then eventually it writes the body to this publisher queue.
A
Yep, cool. So our change is at line number 56: change from the request body to a static string, and then remove the corresponding lines at around 41-42. So this particular function is good now. The producer function is writing to RabbitMQ, and for the dependency, of course, you'll need the amqp library and stuff; all of that is there.
A
Now let's go and configure the RabbitMQ producer function's environment variables. So, the function producer... or actually I will do this on the golang environment, because that's where all the variables are. My RabbitMQ address is the same, because I have installed it in the rabbitmq namespace and the name is the same, and the other stuff too. So if you go and verify with kubectl get svc in the rabbitmq namespace, there is a rabbitmq service, so we are good.
A
Please don't hack my RabbitMQ now that you know the passwords. Cool, that's it, but I also want to open the dashboard of RabbitMQ so we can actually watch it. Let's see the command for that from the krew plugin of RabbitMQ. I really like this plugin of RabbitMQ; using RabbitMQ with it is so much easier. Cool, so it has done the port-forwarding and proxy and everything, all that stuff. Let me minimize this window.
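(The equivalent commands; the instance name "rabbitmq" is assumed, and the default-user secret is the one the cluster operator creates.)

```bash
kubectl get svc -n rabbitmq                      # the amqp + management service
kubectl get secret rabbitmq-default-user -n rabbitmq \
  -o jsonpath='{.data.password}' | base64 -d     # the credentials mentioned above
kubectl rabbitmq -n rabbitmq manage rabbitmq     # port-forward and open the dashboard
```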
A
Cool, and you can see there is a bunch of stuff here. If you go to queues, there is no queue right now; I believe it will be created once we have a producer function, and then we will see the flow of messages and stuff. So that's the RabbitMQ dashboard, which we can watch when we actually invoke the function. Great.
A
Okay, we haven't actually applied anything, so of course we don't see anything right now. Cool, so I'm going to do fission spec apply now. Of course, we're also going to create the functions for the Kafka stuff; they'll fail, but I don't care about that. I just care about my RabbitMQ functions, the producer one and the consumer one, and we'll look at those. So I ran fission spec apply.
A
It has four functions, two environments, four packages, and three message queue triggers, as we saw in the diagram. Basically, we are only interested in two functions and one trigger; we'll look at that now. As soon as you did this... you remember I did this kubectl get deploy in the default namespace earlier and there was nothing; but if I run it now, you will see three deployments: k2k is for the Kafka-to-Kafka trigger, r2f is for RabbitMQ-to-function, and k2r is for Kafka-to-RabbitMQ.
A
And they are scaled to zero. Similarly, if you go and do kubectl get hpa, there's no HPA here yet where there would be one.
A
So what happened is, when we created the triggers, the three triggers, for each of the triggers we asked KEDA, or we actually created, three scalers in KEDA. Scaler is this concept in KEDA which basically scales things back to zero if there is nothing happening. And as you can see, it has created scalers, or the corresponding deployments, and it has scaled them back to zero.
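(You can watch that scale-to-zero behaviour directly.)

```bash
kubectl get deploy -n default           # k2k / r2f / k2r connector deployments at 0/0
kubectl get scaledobjects -n default    # the KEDA scalers behind them
kubectl get hpa -n default              # no HPA while everything is scaled to zero
```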
A
I remember I had worked on this with 1.12.0 without any issues; to see if anything has changed in 1.13, let us see, let's...
A
I can always try to downgrade to 1.12.0 and see if it still works. Let me see... but that may not be it. This is 158, so let me go to the KEDA connector and see what's going on.
A
This should have worked. Let me do one thing: let me remove the pods once. I'm not sure if it's going to help, but it does sometimes; maybe when we created the pod, RabbitMQ was already running, so it should have been able to reach
A
RabbitMQ. We'll take another five minutes to debug this; if it doesn't get solved, then we'll have to skip this part of the demo, which is not what I want. But the same thing did work here earlier, so I'll show you the video in the worst case; I would really like to show you the real thing, though.
A
Function metadata... it's getting the username and password; everything looks good here. I can try to change my version of the RabbitMQ connector. Let me see which one.
A
Okay, still failed. Okay, I'm going to do one more hacky thing now, the last attempt, if this one works. You will see there is an mqtrigger-keda service, and if I do a kubectl get deploy of that, and if I look at that, if I do kubectl edit deploy...
B
Earlier this thing had worked fine. Let's try it again.
A
The scale of that KEDA connector pod goes from zero to one, and if there are more messages coming in, it can scale out from one to more. And to be clear, in that picture that we talked about, the connector is sitting somewhere here, basically; sorry, here. When there is a message in the message queue, the KEDA connector pod goes from zero to one, in between the topic and the function that needs to be called, and if there are no messages in the topic, it scales back to zero. That is the point. Cool.
A
Cool, so we're coming up on 12:30. I would like to take a small four-minute break; we'll be back at 12:30. I think what we cover now is mostly contributing to Fission, some use cases, and some general stuff, mostly theoretical, but it's also good to know how people are using Fission and such.
A
I think we will not need more than 40-45 minutes to cover both of them together, so we should be done by like 1:15-ish. So let's do a quick three-four minute break, and let's meet at 12:30, yeah.
A
All right, so, contributing to Fission. How do you contribute? Of course, contributing docs is probably a good way to start. Join the Slack; happy to help. Between me, Gaurav, and a bunch of other folks, a lot of community folks also respond; a couple of users from Japan and India also respond pretty actively. Now, there are three areas, I would say. The first thing is: if you're an absolute beginner to open source and contributing in general, I would start with the KEDA connectors.
A
They don't need you to know about Kubernetes programming or the general Fission programming so much. Basically, a connector follows a certain contract which is documented in the keda-connectors repo, and you can look at any one sample example; as long as you follow that interface, that contract, as to what environment variables they pass and what to do, it's a fairly simple thing to write. Today we have about six connectors, but if you go to the KEDA website, for example...
A
There are things called scalers, and for every scaler in KEDA we can write a connector for Fission. So if you look at it, they probably have what, 40-odd scalers; you can write a connector for each of them in the keda-connectors repo, and a single connector is not too big. If you look at, for example, RabbitMQ, the one that we were just trying to deal with...
A
There is only one simple Golang file, hardly 200 lines; could be more, could be less, based on the topic. So you can get started fairly easily and start contributing; you'll get your feet wet and also start contributing. That's the easiest place to start, I would say; that's perfect for beginners. If you're a slightly medium kind of contributor, have done a few contributions here and there, and want to level up a little bit...
A
Environments are another area. Environments are nothing but the language-specific runtimes, so you can contribute to or enhance existing environments, or you can bring in new runtimes; right now we are looking for Rust, as an example, and maybe other languages and stuff like that. So you can contribute there as well.
A
The third, of course, is the Fission core. It requires some understanding of Kubernetes controllers to some degree, but even then I would search for good first issues, look at documentation, look at blogs, and pick up one of the simplest issues from there. And if you haven't done any Kubernetes programming around Kubernetes controllers, I would start with this book; it's a great book by Michael Hausenblas. If you follow it cover to cover, you will have a pretty good understanding of how to do Kubernetes controllers.
A
That's a great workshop as well, to learn how to start contributing to a Kubernetes project or a project on top of Kubernetes. The fundamental concepts remain the same: CRs, controllers, and all that stuff, operators basically. But this book is absolutely the gold standard if you want to start learning about contributing to Kubernetes or any Kubernetes-based project, with controllers, operators, and those kinds of things. So that's about contributing.
A
Now, if you go to the Fission core, let me go to the Fission core code base: if you want to contribute, it's simple. There is a tool called Skaffold, and if you do skaffold run, it'll actually build the images locally, push them to your repository, and then deploy that piece of code onto the cluster that you're targeting. So your workflow will typically be: go and change some code in the Fission code base, deploy onto a Kubernetes server, and test it.
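(That contributor loop, sketched; this assumes the skaffold config shipped in the fission repo, and the registry placeholder is yours.)

```bash
git clone https://github.com/fission/fission
cd fission
# build the images, push them to your registry, and deploy to the current cluster
skaffold run --default-repo=<your-registry>
```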
A
All right, any questions on contributing? If you have any, please don't hesitate to drop on to the Fission Slack; you can find the Fission Slack link here.
A
Ashibot says: does writing a blog also count as a contribution? Absolutely; happy to have blogs too. They are probably one of the greatest ways to contribute, and also one of the easiest ways to start, so definitely happy to accept those. And the blog repo is again on the GitHub organization, at blog.fission.io, and you can definitely contribute there as well. It is a simple Hugo website, and you can write in Markdown, and that's about it. Great.
A
So I can do this demo right now, because I think our attempts at making RabbitMQ work didn't succeed. So actually, let me show you how to build and deploy Fission. I have the Fission repo here; let's assume I have made some change (I haven't right now, but let me still do it). Oh, I want to start Docker on my local machine first, and I hope that doesn't hang up everything else.
A
Okay, let's come back to it; hopefully after Docker starts up we'll come back. Let's cover some slides.
A
Cool, so now, one of the common use cases with functions, or any microservice in general, is workflow. You want to chain things: A happens, and that should invoke thing B; of course, in between there might be a message queue that invokes things C and D, and so on and so forth. The way we looked at that video onto the...
A
Yeah, so workflows is a very common use case. We are going to add support for workflows using some of the other projects sometime this year, for sure. We do have another project called Fission Workflows, but that is not actively maintained; Fission Workflows was the way to do workflows a while back, but it is a deprecated project right now and not actively maintained anymore these days.
A
The next is multi-tenancy. A lot of people ask about it, and actually a lot of people use it that way: multiple tenants are using the same Fission platform, and they want isolation between functions or environments and stuff like that, and there are various combinations of use cases in between.
A
Multi-tenancy does exist today in Fission, and customers actually do use multi-tenancy, but there is some work to be done; there are some sharp edges, or rough edges if you have to say, and that is again something to be picked up on the roadmap in the next few months, we think. Now, coming to use cases.
A
Let's talk about a telecom company. They have been using Fission as a platform for almost a year and a half for their customers. It's a telecom CSP actually, a communication service provider, and they provide their software to multiple telecom companies. What they want is: anytime
A
there is an issue in, let's say, one of their servers, that creates a new service request in their platform, and instead of routing that service request to a person, it actually goes to a Kafka message queue, and from there it is picked up by a Fission workflow. The Fission workflow is a fairly complex set of workflows, and based on one function's execution the next step is decided, so it's like a dynamic workflow,
A
if you have to call it that. And the dynamic workflow they have built using their own custom logic, but for the actual units of function execution they use Fission pretty heavily. What happens is that a function goes and queries the server, looks at the resources, does some basic checks and stuff, and gathers all the information; and once the information is gathered, it is run through a simple data analytics or ML kind of model, and based on the results of the model, or the data analytics
A
result, it actually goes and fixes the server itself, or it invokes a person to come and help. So think of it like an auto-healing system, where Fission does a bunch of the initial diagnostics, gathers information, finds out what is wrong with that server, why CPU is high and stuff like that, and gives that information to the human, or tries to fix it on its own if it is confident enough that, no, this is problem ABC, right?
A
So this entire workflow is built using Fission, and it is being used across multiple telecom companies, in almost-production and semi-production environments as well.
A
Another customer is using it for a web scraping platform. What they do is: a web scraping analyst has a UI and enters a bunch of queries, and that query translates into hundreds of functions in the background. Each of those hundreds of functions goes and scrapes websites, scrapes portals and stuff, and gathers the result; then that result is collected and shown in the UI. They've been running in production for almost two years.
A
They have been doing almost a thousand requests per minute of Fission functions, and recently we did an exercise for them to scale that out to 5,000 requests per minute, and they have planned to scale out up to 100k RPM sometime by end of 2021 or early 2022, basically. And when we did this scaling exercise for them last year, it threw up a bunch of interesting problems. They don't have a prediction of when the requests will rise from zero to something like two, three, four thousand RPM, and that means, if you use an AWS load balancer in between, it won't scale fast enough.
A
The second problem we ran into was that Kubernetes wasn't able to spin up pods fast enough sometimes, and that has its own challenges, basically. But finally we were able to scale up to 7k RPM in our test environment with some architectural changes, and a success rate of close to 97% and above. So that's the web scraping platform. One of the world's top ten firms, I believe it is also top five right now,
A
maybe, is using Fission as a back end for a security platform. Think of it like a low-code platform: in the UI, the security researchers write the security code, and that goes and gets executed as a bunch of functions, which eventually go and call either websites or servers to run that security code.
A
Basically, there are many more. For example, Spotify is right now evaluating and trying to switch from Lambda functions to Fission, and it integrates with SQS and a bunch of other message queues and stuff. There's another telecom company evaluating Fission for some of their on-prem as well as on-cloud workloads.
A
There is a Japan-based AI startup: they have been using Fission for their ML workloads. There's a Singapore-based data analytics company: they actually run a lot of analytics on Fission functions, with Kafka as the primary backbone for the message queue. So that's kind of the brief of the workshop, and I'm happy to answer any questions or dive into specific areas. I know we are done ahead of time, so we can probably spend half an hour if there is any specific area you want to talk about, discuss, or ask about.
A
Okay, so right now we haven't tried TypeScript itself, but TypeScript eventually compiles down to JavaScript, right, Nikhil?
A
So if you are able to compile it into JS, the Node.js environment should itself work. In fact, there is an example that we built two weeks back that uses Next.js.
A
Yeah, and with Next.js, dynamic routing and multiple-file support also work with Fission. Now, there has been one user who asked us for TypeScript a while back, but we haven't looked at it. Maybe, if you had an issue... there is already one issue; we can try to look at it and add support for TypeScript as well.
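(A hedged sketch of that route today: compile the TypeScript down to JS and hand the JS to the stock Node.js environment; file and function names are illustrative.)

```bash
npm install typescript
npx tsc hello.ts                  # emits hello.js
fission fn create --name hello-ts --env nodejs --code hello.js
fission fn test --name hello-ts
```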