From YouTube: Knative demo: Building Custom Event Sources for Knative
Description
Check out this demo on "Building Custom Event Sources for Knative" - Murugappan Chetty, Principal Engineer at Optum
Okay, so first I want to say what our association with Knative is. We run Knative on-premises at Optum and we support some serverless workloads. This is our setup, and you can find the usual suspects: we have Serving, Eventing, Tekton for build, and Prometheus, Istio, and all that stuff.
The only additional thing we have is a set of APIs on top of Knative and Kubernetes, just to make the barrier to entry really low for new users who are not familiar with Knative and Kubernetes, and we find a lot of users take advantage of that. We are currently in production, running more than 500 services.
I'm going to talk about sources today: a Kafka source with Kafka binding, followed by PingSource and sink bindings. We also have some custom resources and some domain-specific resources which I cannot share, but I'm going to share some of the custom resources that we built to help run this cluster. I also contribute to Knative and regularly attend the client working group meeting and the hacky hour on Friday.
That's where I get my Knative information. Okay, so today's demo I'm going to give on my Raspberry Pi cluster. All the builds use ko; check out the new ko multi-arch support, it makes things really simple. And yes, I could get the Knative and Contour nightly deployments running.
The only change I had to make, as Matt mentioned, was to change the Go version from 1.15 to 1.16. Okay, so that is the setup that I have. Whatever I'm going to present today, I wrote as a blog post, so it's easier to share than a deck. So, let's check.
So what is a Knative Eventing source? According to the Knative documentation, it's the link between a producer and a sink. The producer can be anything, like a Kafka topic, a Redis queue, or GitHub. The sink is an addressable resource within Kubernetes: it can be a Knative service or a Kubernetes service, anything which can be resolved to a URI. To find the list of addressables, there's a discovery API in the knative-sandbox.
You could use that to find the addressables, and you can also see the list of sources already supported by Knative here.
That's in the knative.dev documentation. So building a source boils down to building that link, which is called an adapter within Knative: building the adapter, shipping it, and distributing it to your teams. That is what building an event source is. The main job of the adapter is to reach out to the producer, get the events, construct them as CloudEvents, and send them to the sink via CloudEvents. There are a lot of advantages to that.
There are three approaches: the first is the controller approach, the second is the SinkBinding approach, and the third is the ContainerSource approach. The controller approach is the one used by most of the in-house Knative Eventing sources. It is a Kubernetes controller, and these are the components that you will be building: an adapter, a controller, and a webhook.
The adapter is the one that's going to produce the events, and the controller is the one that's going to operate the adapter. The webhook is optional: if you want to do some validation or some defaulting, you would need the webhook; otherwise you don't need it. Building a controller can be a cumbersome process, but fortunately there is a template project created by Knative, the sample-source in the knative-sandbox. You can just take this template project and get started from there.
It comes with most of the boilerplate code. It also comes with GitHub workflows and things like that. There's also a sample controller that could be used. Using this approach, I first built an event source called the GQL source, or GraphQL source.
What it does: a GraphQL server, as you know, supports three operations, query, mutation, and subscription. Using the subscription operation you can subscribe to a GraphQL server endpoint, and whenever there's a change in that particular entity, the server sends that information out to whoever subscribed to it. So the source subscribes to the GraphQL server, and whenever there's a change, it pushes it down to the sink; using that information, sinks can take some action.
B
I'll
also
show
an
example
of
like
how
this
can
be
used
so
before
that,
how
I
built
this
controller
using
the
sample
source
controller,
just
give
some
steps.
I
cannot
go
into
the
details,
of
course,
but
first
step
always
in
the
apa
section.
After
defining
the
version,
go
and
define
the
types
that
you
want
and
then,
along
with
that,
there
are
some
information
about
the
license
elements
and
all
these
things
you
can
create.
So
once
you're
done
with
that,
you
just
need
to
change.
The next step is to generate the code; this part generates most of the code for you. There is the Kubernetes code-gen and a Knative code-gen, which is used by the reconcilers and the client. This is the client package, which is going to have all the information that was generated by this code-gen.
B
Okay,
so
once
the
code
gen
is
completed,
the
next
thing
is
to
write
the
reconcilers,
and
for
this
also
there's
a
controller,
you
just
need
to
confirm
to
the
k,
natives
controller
interface.
So
once
you
confirm
to
the
k,
natives
controller
interface
and
write
the
controller
and
the
b
concealer
logic,
the
advantage
that
you
get
this,
you
can
make
use
of
the
shared
link.
So
this
is
the
entry
point,
and
this
is
all
the
code
that
you
have.
Everything
is
like
injected
with
a
shared
mine
and
there's
a
lot
of
work
going
on
there.
You can go to the knative.dev/pkg GitHub repository and see what the shared main does: getting the Kubernetes clients, config map watching, logging construction, environment variable reading. All these things are done by the shared main. On a side note, the knative.dev/pkg repository is like a gold mine; there are a lot of libraries there. If you are building something for Knative and Kubernetes, always check out the libraries in knative.dev/pkg, because there are a lot of supporting functions there.
Okay, so that's about the controller. In the controller, all you're doing is reconciling and then building a receiver adapter, and the receiver adapter is nothing but a Docker image. So technically it can be written in any language.
The controller just needs a Docker image, but if you write it in Go, you can take advantage of the libraries that Knative gives you. Okay.
The next step is to build the adapter. Similar to the controller interface, there's an adapter interface, and once you conform to the adapter interface, all you do is implement the Start method and write your logic from there. You can make use of the adapter main; this adapter main, just like the shared main, has a lot of code done for you. You just need to follow that interface and then supply your adapter to the adapter main.
So that builds the controller and the adapter. I have not built a webhook in this process; I didn't need to. Once you're done with these things, the next part is easier: you go inside the config folder, where there's a list of YAML files.
You have to change them according to your needs, and then you define the shape of your source in the custom resource definition. Once that's done, you can just deploy this custom source in your cluster. I have already deployed it, just to save time, and I'm going to show an example of how to use it.
We want to tweet out these changes, so to do this I created a GraphQL source. This GraphQL source will subscribe to two queries: one on the menu items, and the second on the info items. This is the GraphQL endpoint it's going to listen to, and whenever there's an event that it needs to send, it will send it to this Knative service. That's our sink. Okay!
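A sketch of what such a custom resource could look like, assuming a hypothetical GraphQLSource CRD; the API group, kind, and field names are illustrative, and only the sink block follows the standard Knative duck-typed Destination shape:

```yaml
apiVersion: sources.example.dev/v1alpha1   # hypothetical group/version
kind: GraphQLSource
metadata:
  name: restaurant-watcher
spec:
  endpoint: ws://restaurant.default.svc.cluster.local/graphql  # placeholder
  subscriptions:
    - subscription { menuItemAdded { name } }
    - subscription { infoItemChanged { text } }
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: tweeter
```

The controller reconciles this object into an adapter deployment that holds the subscriptions open and forwards each change to the sink.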
So I have all these things deployed, and I also have minScale set to one on some of them, because I want to save time. Okay, so this is the application.
So currently these are the menus that are available. When a manager comes in and wants to add some information, suppose I want to add ice cream.
That goes to the source, and the source will have sent it to the sink, so you get a tweet for it. Okay, and similarly, if I go and remove some information, obviously that is also going to show up right here.
It's that quick. Okay, and to show it on the back end: this is the restaurant application, this is the GraphQL source that was deployed, and I also have the Knative service that's going to process the information and tweet it out. Okay. So this is the source that I just built using the controller approach.
In some cases you wouldn't need the controller approach. In some cases you would want the Knative service itself to send out an event, or a Kubernetes Job itself to send an event. In those cases you don't need a controller; that's when you can make use of the SinkBinding. SinkBinding is a custom resource, again provided by Knative Eventing and managed by the Knative Eventing controllers, and all it does is this:
Whenever you create a SinkBinding custom resource, it injects two environment variables into the desired PodSpecable Kubernetes resource. A PodSpecable Kubernetes resource is one which has a pod spec in its definition, like a Kubernetes Job, a Deployment, or a Knative service. The injection is based on labels, and it's namespace-scoped.
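For example, a SinkBinding that binds every Job carrying a given label to a Knative service (the names here are placeholders) could look like this:

```yaml
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: drug-file-binding
spec:
  subject:                # the PodSpecable(s) to inject into
    apiVersion: batch/v1
    kind: Job
    selector:
      matchLabels:
        app: drug-file-source
  sink:                   # resolved to a URI and injected as K_SINK
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: drug-processor
```

Any Job created later with that label gets the sink environment injected automatically, with no change to the Job definition itself.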
B
If
I'm
not
okay
and
you
can
make
use
of
the
k
native
libraries
to
to
fetch
the
casing
information
and
create
a
cloud
events
client.
So,
from
from
a
developer's
standpoint,
all
you
need
to
do
is
like
build
your
events
in
cloud,
even
format,
and
then
you
can
make
use
of
the
client
that
is
injected
by
native
and
then
send
the
information
out.
Using this approach, I built a source called the S3 source.
Okay, the goal of this source is to reach out to an S3 object store, or any S3-compatible store like MinIO or AWS S3.
It can reach out to any bucket, get the file that was specified, and send each line, or multiple lines, as a CloudEvent to the sink. This is helpful for enterprises which process a lot of flat files; in a flat file, each line and each position in a line has a meaning, so all they need is to process single lines.
So this source takes the job of going to the bucket, fetching the file, and then taking the data and sending it to the sink. Here the main component is the Kubernetes Job: the sink is injected into the Kubernetes Job, and that's what sends the data out. We chose a Kubernetes Job because these files can be really big, like 5 GB or 10 GB or something like that.
We don't want a Knative service to be running that long, but at the same time a Kubernetes Job is one-time; you cannot reuse it, right? So we have a container image, run as a Knative service, that creates these Jobs on the fly. Based on the bucket information and the file information, it will create a Job, and the Job will also have some static information.
That includes the connection information for the S3 bucket. Using this information, the Knative service creates a Kubernetes Job, and the Kubernetes Job sends the information to the sink. So the Knative service has two endpoints: one is to create the Kubernetes Job.
The second is to check the status of the Job that was created. Once it does this, the Knative service will scale to zero and the Kubernetes Job will run to completion. Okay, so an example of how you would specify this: this is a Knative Service definition, nothing special.
This is the static image you'll be using, and this is the Kubernetes Job image, which is passed as an environment variable called JOB_SPEC; that's where the Knative service expects it to come from. All the connection information is provided as a secret. It can take all these configuration items either as environment variables or as query parameters to the Knative service, but the secret connection information you don't want to send as a query parameter; you create it as a secret and mount it as environment variables. Mainly the bucket information and the file information, which keep changing, are passed as query parameters.
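A sketch of such a Knative Service definition, with the Job image passed through the JOB_SPEC environment variable and the connection details mounted from a secret; the image names and the secret name are placeholders, not the ones from the talk:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: s3-file-source
spec:
  template:
    spec:
      containers:
        - image: example.registry/s3-file-source:latest   # static service image (placeholder)
          env:
            - name: JOB_SPEC
              value: example.registry/s3-job-runner:latest  # image the created Job runs (placeholder)
          envFrom:
            - secretRef:
                name: s3-connection   # endpoint, access key, secret key
```

The bucket and file names then arrive per request as query parameters, while everything sensitive stays in the secret.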
So where does the SinkBinding come in here? To explain that, I'll take an example. In this example, what we're trying to do is process a drug file. Okay.
This drug file is a CSV file; it has a list of drugs based on anatomy, and I have two Knative services to process it. One service will split the file based on anatomy and upload it back into the S3 bucket, and the second one is just a CloudEvents viewer.
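The splitting service's core, assuming the anatomy code is the first CSV column (the talk doesn't show the actual file layout), could be as simple as:

```go
package main

import "strings"

// splitByAnatomy groups CSV lines by their first column, assumed here
// to be the anatomy code, mimicking what the drug-processor service
// does before uploading each group back to the S3 bucket.
func splitByAnatomy(csv string) map[string][]string {
	groups := map[string][]string{}
	for _, line := range strings.Split(strings.TrimSpace(csv), "\n") {
		if line == "" {
			continue
		}
		key := strings.SplitN(line, ",", 2)[0]
		groups[key] = append(groups[key], line)
	}
	return groups
}
```

Each group would then be written back as its own object, one file per anatomy code.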
It will show the events on the screen. Based on the SinkBinding labels, we can see how we can send the data to two separate Knative services; that's what I want to show here. Before that, what does it take to set up this S3 source? This is a one-time process in each namespace, per set of S3 connection information.
First, you create all the S3-related information as secrets. Next, you need to create a role binding so that the Knative service can create the Kubernetes Job. Third, you need to have Knative Eventing core installed for the SinkBinding to be injected. And finally, you deploy this S3 source. This is a one-time process; you now have the Knative service up and running. Okay.
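The role binding step, for instance, could be sketched like this; the names are placeholders, and the rule just has to let the service account behind the Knative service create and read batch/v1 Jobs:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-creator
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: s3-source-job-creator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: job-creator
subjects:
  - kind: ServiceAccount
    name: default   # or a dedicated service account for the source
```

Because Role and RoleBinding are namespace-scoped, this matches the per-namespace setup described above.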
Now the next step is to create the Knative services which are going to process this drug information. So this is the first Knative service, which is just a drug processor.
As I said earlier, it's going to split the file based on anatomy and upload it back into an S3 bucket, and as you can see, the binding will inject into the Job which has this label, drug-file-source, and it will be sending to this sink. Okay. The second one I have is a CloudEvents viewer; it's just a UI, and the match label for this one is view-drug-file-source. Okay.
First, before that, I just want to show how to run kn together with ko publish, everything in a single line. I'm not going to wait for it to complete; this will be running in the default namespace. Okay, so you can see there's ko publish with --platform set to all, or you can have linux/amd64 or linux/arm64, whatever you want.
And now this is creating the Knative service; I don't want to wait for this, it will work. Okay, so first I'm going to this endpoint to call the S3 file source, and in the query parameters I pass this label, drug-file-source, along with the bucket information and the file information.
B
Okay
and
currently
you
can
see
here
the
k
native
the
s3
file,
so
service
is
not
running
okay,
so
now,
when
I
run
it,
it's
going
to
start,
and
once
this
is
done,
it's
running
the
job.
Now,
okay
and
it's
calling
the
drug
processor,
can
it
says
okay,
so
simultaneously
I'll
run
the
view
one
also
so
there's
a
this
is
view
direct
file,
source,
okay,.
Okay, now this is the CloudEvents viewer. All the events that are there got sent to the UI. Okay.
B
So
here
you
can
see
the
advantage
of
sync
binding
there.
So
with
you
don't
need
to
change
any
of
the
k
native
deployments.
You
don't
need
to
change
the
destination
in
order
to
change
the
source.
Just
by
changing
the
labels,
you
can
send
it
to
like
two
different
destinations:
okay,
okay,
so
with
the
first
option
and
second
option,
you
should
be
able
to
do
most
of
the
things,
but
the
main
disadvantage
that
I
would
see
with
sync
binding
is
like
I
mean
that's
the
way
it
is.
The way you distribute this event source to other people is as a Knative service, and people can fat-finger it and make mistakes; there's no CRD to do validation or anything like that. At the same time, a lot of people don't want to create a controller, because they don't have the expertise to do that. If they still want a controller which is managed by somebody else, that's when you choose the ContainerSource option.
Okay, so the only job that you need to do is create the adapter, and you expect the ContainerSource to do the rest of the work for you. For this I'm going to use the FTP source by relay. So this is the ContainerSource, and this is the only image it needs, the adapter image; this is the only responsibility of the user. Everything else is managed by the Knative Eventing controllers.
You pass all the args that you need to process, and all these environment variables and args will be created as a Deployment, and whatever sink information you provide will be created as a SinkBinding. Okay, so it's the same as the SinkBinding approach, only the Knative Eventing controller is going to create the Deployment and the SinkBinding for you.
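A ContainerSource for this could look roughly like the following; the kind, template, and sink fields follow the real ContainerSource shape, while the image and args are placeholders:

```yaml
apiVersion: sources.knative.dev/v1
kind: ContainerSource
metadata:
  name: ftp-watcher
spec:
  template:
    spec:
      containers:
        - image: example.registry/ftp-adapter:latest   # your adapter image (placeholder)
          args:
            - --host=ftp.default.svc.cluster.local     # illustrative args
            - --poll-interval=30s
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```

The Eventing controller turns the template into a Deployment and wires the sink to it with a SinkBinding, so the user only maintains the adapter image.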
Okay, and instead of showing it in a terminal, I'll use a UI.
There is a UI called Graph, by Scott, and you can use it to display all the Knative services and sources within your namespace; it also shows the connection between the source and the destination. So this is the advantage I was mentioning about CloudEvents: you can see there are three connections, three sources sending information here, and I have a single consumer to consume it. That is made possible by CloudEvents. And as I was saying, it creates a Deployment and a SinkBinding for the ContainerSource.
B
Here
you
can
see
there
is
a
container
source
also
pointing
to
the
destination
and
also
a
single
binding
for
the
sftp
watcher
pointing
to
the
sync
binding
okay.
So
what
the
sftp
watcher
does
is
like
it's
going
to
watch
for
a
ftp
location
and
it
will
be
polling
there
whenever
there's
a
new
file,
it's
going
to
send
the
information
to
the
sync.
So
I
have
an
ftp
server
within.
There
has
been
I'm
going
to
just
upload
an
information
using
the
cloudberry.
It'll take some time... yeah, here you can see the information. Whenever a file is added, you can see the information here. It keeps track of the latest timestamp, and whenever there's a file after this timestamp, it sends it to the Knative sink. This timestamp is stored in a ConfigMap, so you don't lose it when there is a crash or something like that.
B
Okay,
so
these
are
the
three
options
that
we
have
to
build
the
source
and,
as
I
told
earlier,
I
explained
the
concept
of
the
source,
but
apart
from
this
option,
if
you
have
a
different
pattern,
you
can
use
the
attachment
and
then
compare
these
three
options
might
be
debatable.
So
I
don't
want
to
discuss
here
so
you
can
go
through
the
I'll
share
this
link
and
if
you
have
any
comments,
please
let
me
yeah.
That's
all
I
have
thanks
for
this
opportunity.