From YouTube: OCB: Consuming AWS events the Kubernetes way for event-driven applications (TriggerMesh)
Description
In this show we will demonstrate how you can use the TriggerMesh operator for AWS event sources and deploy event consumers for AWS services. For example, we will show how to consume messages from SQS and Kinesis and send them to Kafka, all running in OpenShift. The TriggerMesh operator, available as an OpenShift operator, gives you a declarative API to deploy event consumers for Kinesis, SQS, CodeCommit, DynamoDB and Cognito, and as such opens the door to multi-cloud event-driven applications.
A
Today we are lucky enough to have with us our friends from TriggerMesh. We have Sebastien Goasguen, the co-founder and chief product officer of TriggerMesh, and then, joining us from India, we have Sameer Naik, the senior software engineer who's going to be showing us some of the capabilities in a demonstration of the TriggerMesh software. Sebastien, how are you today?

C
Doing great, thank you very much. How are you?

A
I'm doing really well. Thanks for joining us, we're happy to have you here.
A
A Red Hat operator for OpenShift, so that, you know, customers, when they're using our container orchestration platform, know that they're going to have the best day-two supportability, you know, for production environments. So TriggerMesh has... you know, we've been working with you folks for quite a while.
A
You have a Red Hat certified operator, it's available in our registry, it's available in our marketplace, the Red Hat Marketplace. But tell me a little bit about TriggerMesh. You're one of the founders of the company. Tell us a little bit about how that came about, you know, what's in the name TriggerMesh, and so forth?
C
Yeah, sure, no problem. You know, TriggerMesh... so we got started when Google announced Knative, the Knative project. That was July 2018, they announced Knative.
C
I had created Kubeless, one of the first FaaS, function-as-a-service, solutions on top of Kubernetes, and when Google went ahead and announced Knative, Red Hat was on board, and then IBM, you know, was on board, even though, of course, you and IBM were working on OpenWhisk. But then everybody aligned on Knative.
C
Ultimately, you know, Pivotal and VMware also joined, right? So it looked like everybody, all the vendors at least, were going to join forces and work on Knative. So I decided to launch a new startup, TriggerMesh, to work in the serverless ecosystem on top of Knative. And our vision was not just functions: it was especially event orchestration, because we think that eventing is the big...
C
...you know, the big component of serverless, as you're trying to build event-driven applications. So we created TriggerMesh very quickly in July 2018 with my co-founder Mark Hinkle. We helped GitLab develop GitLab Serverless, with some work we did for them very early on, and since then we kept on developing, you know, product around Knative, understanding the use cases, talking to lots of folks, doing a lot of market discovery. And then finally, at some point, we thought we were ready. So last December, December 2019, we raised a seed round, a three million seed round, from Index and Crane, and in early 2020 we were super excited, we decided to start building the team. That's when, ultimately, Sameer joined. We're a fully remote, fully distributed team. So we got started, first engineering hire, you know, March first, and then end of March, COVID, right? I mean, it started before, right, but it's been an interesting year, you know. But thankfully we are...
A
Yeah, I was going to say, you know, you're fully distributed, remote. We at Red Hat had a large component of the company remote, but, you know, certainly lots and lots of us were working in offices, and it's been pretty interesting for the last nine or ten months. You know, I think we have 380,000 employees now in the combined company, and everybody's working from home, and I thought it was going to make less work.
A
I thought it was going to be a little less busy, and it turns out, you know, things are busier than ever, which I suppose is a good problem to have. But you folks have been really, really busy. You've had a pretty exciting year this year: you guys got named by CRN one of the top 10 coolest startups of 2020.
C
Yeah, yeah, so it's been an exciting year, you know, because we were building the company. I mean, whatever the world environment with the pandemic, it was exciting nonetheless to build a company. So first you put a team together, right? So, you know, guys like Sameer in India, we have Antoine, we have folks in Germany and Spain and, you know, the US, and I'm in Switzerland.
C
So putting all those folks together, building a team so that we can start building a product, that's been very, very exciting. And then, you know, really, as startup people, trying to find the real product-market fit is actually quite fun, because you have a vision, which is, you know, developers are going to adopt and build event-driven applications using serverless functions and cloud services.
C
Increasingly so. That's our vision, but then you need to find the sweet spot. So it's been really, really good. And then, you know, "coolest startup": it's because we're working with cutting-edge technology, on top of Kubernetes, on top of, you know, OpenShift. We push Knative; we're the number six contributor in Knative, behind, you know, all the big dogs, right?
C
You know: you, VMware, Google, right? So we're a small startup, but we still managed to, you know, get news out there, to get contributions into the ecosystem. So here, on the OpenShift Marketplace, we put some AWS event sources, right? So you're trying to build an event-driven application using events from AWS...
C
...but then you want to trigger workload on your OpenShift cluster. You know, you have an open-source project from us, and then it's also available in the marketplace. So we do lots of things, and we make sure that, you know, people know about them. And then, being small, that also allows us to go quite fast.
A
Now, you used the word "trigger" there, for triggering. Is that, you know, TriggerMesh... is that sort of, you know, what's in the name?
C
Yeah, yeah, totally. So the vision was definitely building serverless applications, which we saw as a mesh of serverless functions, cloud services on AWS, on Google Cloud, on Azure cloud, and then on-premises workloads running in Kubernetes or OpenShift, right? So we saw this hybrid...
C
...you know, world, right? So this composition, this mesh of, you know, workloads and services, that's what we saw. And then we're like: you know, to be able to start the execution of those workloads, you're going to need events to flow through the system, and, you know, certain state changes are going to trigger those workloads.
C
So that's how we came up with TriggerMesh, which I think is great, but for some people it's also a little bit confusing, because they think we are a service mesh company. We're not a service mesh company.

A
Sure. Now...
C
Yeah, so there's a meme actually on Google, it's exactly what you just said: you know, serverless is just somebody else's servers. I just gave a talk at KubeCon, it's called "Serverless or Service-full?". I think serverless is a little bit of an unfortunate word, because everybody says: oh, you know, there are no servers. Well, of course there are servers, right? You have a process running, it needs to run somewhere. So, yes, it's basically managed services, and then, really, you know, the philosophy behind it is...
C
You have more and more services in your apps, in your enterprise. You have AWS, you have ServiceNow, Salesforce, Elasticsearch, Twilio, on-premises services. So it's just full of services, and you're trying to make sense of this, you're trying to compose them, you're trying to trigger them, you know, when something happens, right? So "service-full" would be much better, I think.
A
You know, moving from the data center into a public cloud, and to a multi-cloud environment, is great. You know, OpenShift allows people to build once and deploy anywhere. But it's not all perfect, right? I mean, this new flexibility brings all kinds of new challenges for customers trying to run those production workloads in a multi-cloud environment. How are you folks helping address those issues for customers?
C
Yeah, so, you know, we got the seed in December, we started hiring in March, so we are nine months in, right? So I don't have a huge treasure of war stories, right? But the POCs that we're doing, especially with a lot of financial institutions, a lot of banks... you know, they come to us inbound, and they have issues. Mostly, what came up is that they have integration issues.
C
So when I say composing services, it translates into integration issues. For example, we work with a bank. They started doing lots of stuff with Salesforce, so all of their customers are in Salesforce; every time they do a change, it's in Salesforce, it's their database, and they need their backend infrastructure to basically keep in sync with what's in Salesforce. So, you know, big challenge: linking the two, an integration, right?
C
That's the type of integration that we're seeing, and we're seeing more and more. So, you know, the Azure healthcare API that somebody wants to use, but then they have lots of things on AWS, right? So they need to link Azure healthcare to SNS or, you know, EventBridge, things like this. Overall, you know, the big challenge that all the startups have, I feel, is that we're all very cutting edge, and there are multiple speeds, right: the speed of a startup...
C
...that's, you know, cutting edge, with products like Knative or Argo Events and so on, and then the pace of an enterprise. It's totally different on the, you know, innovation curve, right? So you still see a lot of companies that are trying to put CI/CD in place, for example. You know: we need CI/CD, we need to speed up deployment to OpenShift. So it's one of our big POCs; you know, it's been one of their biggest drivers.
C
How do we speed up deployment to OpenShift from 30 days to one hour, right? So they put lots of things in place, but then there are big questions, like security, you know: security evaluation, risk assessment, things like this. So how do they make that happen? Which is a basic, basic problem. So it all comes back down to automation: how can I put more automation in my infrastructure, in my pipelines, right?
C
What is going to make them take the decision? And, you know, you go back to very simple things: how can I speed up my time to market, my time to value, right? Which... you know, that one example is very telling, because it's absolutely true, and it's, you know, speeding up the time it takes to get to go-live, yeah.
A
Oh, okay, wait. We were talking when we were on the bridge, just before we went live, and you were mentioning that there was something that you wanted to show here before we get on to Sameer. I know Sameer has a demo that he's going to be sharing with everyone, but I think you said that there was something that you wanted to show before we get to Sameer.
C
Yeah, can we do this? So Sameer has a cool demo on OpenShift, but I can show you some basic things here. Okay, cool. So I logged into TriggerMesh. We have a SaaS, so, you know, you can deploy on any Kubernetes cluster, but we have a SaaS offering. So here I'm logged in. We have, anyway, lots of little nice things in the UI.
C
This is what we call our bridge catalog. They represent basic integrations between services, right? You're trying to go service-full. So you've got things like, you know, Salesforce to Elasticsearch, Slack to Confluent, Slack to Google Sheets, right? So all of these, they're in a catalog, and you say "use template", and then you can fill in the parameters to basically configure those event sources and the event... we call them targets.
C
Some people call them event sinks, right? So event sources to event sinks. So when you do this, you end up building, you know, the key to TriggerMesh, which is bridges. You build a bridge between two services. What's interesting, because we rely on Kubernetes, is that those bridges...
C
...you know, they have a declarative API, right? So we have a bunch of CRDs, we extend the Kubernetes API, and then, you know, with that powerful UI, you can create those manifests that represent those bridges, right? And if we try to do a, you know, quick bridge here: I want to do SQS, SQS to just a web app that shows the event.
C
So here I see the sources that are available, so I'm going to create an AWS event source, right? You know, "red hat demo". We're going to go through a broker, a message broker, and here I need the ARN of my queue. So I have my AWS console, I'm going to zoom in, I copy the ARN, I put it here. So that's all I need to configure an AWS event...
C
...an AWS event source, right? We have a few sources for Azure and Google, and here, as a target, I put a basic web app. So when I've done this, I've created my bridge, and I submit, right? So now we have a controller, you know, that's running in your Kubernetes cluster; it's going to see the objects and it's going to create them. And of course I messed up the name. You see that I have a full manifest for...
C
This is actually a serverless... we call that a function, but it's a serverless application. And so when I put events in SQS, I'm going to consume them, and then they get routed to this app, all right? So I go into SQS, and here I say, you know: "hi mike". I put the message in SQS, and then, you know, boom, I get it right away, all right? So now we have...
C
We have Chris in the background, right? Hi Chris! You know, I said "send message", all right, and then we get, you know, "hi chris". So, you know, it looks simple, but what you have here is a scalable SQS consumer that's been automatically deployed in your Kubernetes cluster. It can scale; it's a good Kubernetes deployment.
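The SQS consumer Sebastien configures through the UI boils down to a single custom resource. A minimal sketch of what the generated manifest might look like; the API group, version, and field names here are assumptions based on the TriggerMesh source CRD family and may differ by release, and the ARN and secret names are placeholders:

```yaml
# Hypothetical sketch of a TriggerMesh SQS event source; verify field
# names against the CRDs installed by the operator.
apiVersion: sources.triggermesh.io/v1alpha1
kind: AWSSQSSource
metadata:
  name: redhat-demo
spec:
  # ARN of the SQS queue to consume, copied from the AWS console
  arn: arn:aws:sqs:us-east-1:123456789012:my-queue
  credentials:
    accessKeyID:
      valueFromSecret:
        name: aws-creds
        key: access_key_id
    secretAccessKey:
      valueFromSecret:
        name: aws-creds
        key: secret_access_key
  # Where the emitted CloudEvents go: here, a Knative broker
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```

Because this is just a Kubernetes object, the controller reconciles it into a scalable consumer deployment, which is the behavior described above.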
C
Every time there is a message that's being put in SQS, the source emits a CloudEvent. We follow the CNCF specification: it emits a CloudEvent over HTTP. It gets to a broker, and then from that broker it gets sent to the target, which is, you know, this web app, and that basic web app just shows you the JSON. So you see the body, all right?
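The broker-to-target hop described here is wired up with a Knative Eventing Trigger. A sketch, assuming the display web app is deployed as a Knative Service named `event-display` (that name is made up for illustration):

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: display-all-events
spec:
  broker: default           # the broker the SQS source points at
  # no filter block: deliver every CloudEvent to the subscriber
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display   # hypothetical name for the web app target
```

Adding a `filter` stanza would restrict delivery to events with particular CloudEvent attributes, e.g. a specific `type`.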
C
You
have
you
have
to
ask:
what's
his
name,
scott,
who
is
now
at
who
is
now
at
vmware,
it's
because
it's
based
on
web
socket
so
suck.
Why
why
the
fish?
I'm
not
sure!
Why
do
you
know
samir.
A
Sebastien, when you say "any Kubernetes", does that mean, you know, any community edition of Kubernetes as well, or just the mainstream commercial ones?
C
Any, any. Because, you know, TriggerMesh just gets deployed... the entire TriggerMesh platform is really, you know, a set of CRDs and a set of controllers. The AWS event sources specifically, which are in the marketplace, you know, this is just one controller, right? One, I mean, operator.
A
And I was going to ask you about your operator. I think you got your operator built in July, in the summertime of 2019, with us or something. How does the operator help people who are using TriggerMesh, in either test and dev or in a production environment?
C
You need to have an operator. That's where the story gets a little bit funny, and, you know, at some point we should discuss this, right? Because we need to create an operator to install our controller, right? And, you know, the first one... I mean, how did we do this quickly? Well, we based it on a Helm chart.
C
So we had an operator to install the Helm chart to install a controller. You know, it doesn't feel very optimal, but we needed all of this to get through, basically, the OperatorHub.
C
No, no, no. So what I showed you is the SaaS, right? So if you want to use it, you go to the SaaS; there is no connection to on-prem, all right? If you want what I just showed you on premises, then, you know, you have to talk to us, and then we install all the bits in your Kubernetes cluster or something else.
C
Well, you know, you end up with just a controller, right? So you can configure the AWS event sources, and then the targets: it's going to be OpenShift Serverless, right? So, you know, we only provide you the sources; they're certified, we support them. But then, for the targets, you fall back on OpenShift workloads, OpenShift Serverless, you know, functions, right?
A
Good. Sameer, I think you're on deck. Well, and, you know, welcome from India. Thanks for staying up late, I think... staying up late or getting up early, one of the two, right?
B
Yeah, so, hi, my name is Sameer. I am from Goa, like Mike mentioned. So what I wanted to do is to extend on Sebastien's demo: I want to demonstrate the same thing on the OpenShift platform, using our operator that was mentioned by Sebastien. So, like Seb mentioned, TriggerMesh cloud is our SaaS offering, but we do package these bits and pieces of our cloud platform and make them available. So one of these is the AWS event sources package.
B
So, using that package, you could do the same sort of integration by hand on OpenShift. What I want to demonstrate is an application use case. Say, for example, you have a Kinesis data stream, and your use case is that you want to pick out messages from the Kinesis data stream, modify them with your application logic, and then send them to a Kafka queue. This is just an imaginary use case. So typically, if you're an application developer, what you would go ahead and do is write a whole application for this scenario: do the integration with Kinesis, then do the integration with Kafka, and then write your application logic, right? That is where serverless comes into the picture: you don't have to bother about the integration with Kinesis and Kafka, you just work on your application logic and use the existing components. That is the whole idea of serverless. Right, now...
A
Sameer, can I just jump in for one second? We're having some questions come in in the Blue Jeans chat, and I wanted to make sure that we're getting the questions addressed and answered, because we're also going out on YouTube Live and Facebook Live. So, Sebastien, I don't know if you want to address that question, so that the people on YouTube and Facebook can hear the question and the answer as well.
C
Oh yeah, sure. So yeah, I was replying to the question. There was one... sorry, Sameer. There was one question: are sources and sinks implemented as Kafka connectors? Not really, because you don't necessarily use Kafka as the messaging substrate. So sources and sinks just deal with CloudEvents that are being sent over HTTP, and then it's the broker, under the hood, that could be a Kafka topic, but it could be something else than Kafka.
C
It could be Kinesis, or Azure Event Hubs, or Google Pub/Sub. And, Sameer, why don't you jump on the... you know, show us the OpenShift operator, where we see how you did the config of the AWS event sources.
A
And if Chris Short picks up some more questions from YouTube and Facebook, are you okay with us, Sameer, or do you want to hold the questions till the end of the demo?
B
No, I'm okay pausing for the questions. Okay.
A
I'll try and barge in politely if they come in.
B
Yeah, in the demo there are going to be some points where there could be some pauses, so at that point it could be a good segment to have the questions. So, like I was saying...
B
...consume AWS events occurring on the AWS Kinesis stream. So some messages are coming to your Kinesis stream; say you want to process them with your application logic and then send the processed message to a Kafka queue. The use case can be anything, but this is just an example to give you an idea of what you can do with our sources component, right? So this is an overall view. You can see my screen, right?
B
So, with that application in mind, I will just quickly go over the setup that we need for getting our application running. So here is my OpenShift console.
B
So I just completed installing the Serverless Operator, and I also instantiated the Knative Serving and the Knative Eventing APIs. This is the standard installation process: if you visit the readme of that operator, that is how it has to be installed.
B
Second thing is, now that we have the serverless components available, we can go ahead and install the AWS Sources Operator by TriggerMesh. So we have two versions available.
B
The project is actually open source, but with the one from the marketplace you get 90 days of trial before you can ask for support, I think. So you just go and install the one from the catalog, again just accepting the defaults. And this is the operator that Seb was talking about, which is an operator wrapping a Helm chart that installs the controller. And, same thing, we need to instantiate the API provided by the operator, so let's just go and instantiate it in the default...
B
...namespace. So this provides the infrastructure for our application to run. One of the things that I had spoken about was: we are going to post the message to a Kafka queue. So, since we are using the OpenShift platform, we can also deploy a Kafka queue on OpenShift, so let's take advantage of that as well. To do that, there is a Strimzi operator available on the OperatorHub, so you can just go and install that one with the defaults.
C
So, maybe, Sameer, would you... I can... because that can be confusing. So TriggerMesh itself is not a streaming platform, we are not a messaging platform, so we need a messaging platform. Okay? Knative Eventing, same thing: Knative Eventing doesn't do messaging. It's a set of API constructs that allow you to build some eventing flow, right? TriggerMesh uses Knative Eventing.
C
We have additional bits, like the AWS event sources, for which we provide support, right? But under the hood, if you need messaging, you need something like Kafka, right? Or NATS, or, you know, RabbitMQ maybe, or Kinesis, right? So here, when you're on premises, it makes total sense to use Kafka. That's why, you know, Sameer is installing a little Kafka cluster through the Strimzi operator, yeah.
B
So, now that I have installed the Strimzi operator, I can go and instantiate the Kafka cluster, which is pretty cool. This is all you need to do to create a Kafka cluster, right?
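For reference, the "all you need" here is a single Strimzi `Kafka` custom resource. A minimal, demo-sized sketch with ephemeral storage (the `apiVersion` varies by Strimzi release, so treat it as an assumption):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral   # demo only: data is lost on pod restart
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}   # lets KafkaTopic resources manage topics
    userOperator: {}
```

The Strimzi operator watches for this object and spins up the broker, ZooKeeper, and operator pods on its own.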
B
...using Kinesis as a source. So this is what manifests of the Kinesis source look like. Very minimal: we just specify the ARN of the Kinesis stream, and then the credentials to access the AWS service, and then the sink, where the events... so, when an event is picked up from the Kinesis data stream, it is sent to a sink element, and in this case, in the default broker sample, it is sent to the default broker on Knative.
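The Kinesis source manifest described here follows the same shape as the other TriggerMesh sources. A hedged sketch; the field names are assumed from the TriggerMesh CRD family, and the ARN and secret are placeholders:

```yaml
# Hypothetical sketch of a TriggerMesh Kinesis event source.
apiVersion: sources.triggermesh.io/v1alpha1
kind: AWSKinesisSource
metadata:
  name: my-app-stream-source
spec:
  # ARN of the Kinesis data stream, from the AWS console
  arn: arn:aws:kinesis:us-east-1:123456789012:stream/my-app-stream
  credentials:
    accessKeyID:
      valueFromSecret:
        name: aws-creds
        key: access_key_id
    secretAccessKey:
      valueFromSecret:
        name: aws-creds
        key: secret_access_key
  # Sink: the default Knative broker in this namespace
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```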
C
That one is interesting. So you see that the TriggerMesh event sources are a set of CRDs that allow you to deploy event consumers for specific AWS services. So here it's Kinesis: you deploy this on your OpenShift cluster, and suddenly you're consuming events from Kinesis. Where do they go? They go to the sink reference. So you see, in the spec, there is a sink.
C
If you go back... yeah. There is a sink, right? And here, that sink, you see that it's a Knative broker, but it can be something else, right, including an OpenShift Serverless function. So, you know, now suddenly you have a flow: you're consuming messages from Kinesis, and they can go to your OpenShift Serverless.
C
So basically, what you're going to show here, which is quite powerful, is almost a sync between... and, you know, my French accent is saying "sink", but s-y-n-c, right? So you're going to sync Kinesis and...
B
...Kafka. So, just to complete my... so this Kafka cluster is accessible only within the OpenShift platform. The way I'm going to post my messages to the Kafka cluster is through the HTTP API. So what I want to do is expose a REST bridge that is made available by the Kafka Bridge component, so I will instantiate the Kafka Bridge.
C
So none of this is really TriggerMesh specific here. We're doing it really step by step, from scratch: creating a Kafka cluster with Strimzi, creating the HTTP, you know, proxy, so that we can inject into Kafka.
B
Yeah, so, as the bridge is coming up, let's go and set up the route. The idea of the route is to be able to access our REST proxy, right? So I'll name it the same way.
B
So, yeah, this exposes the REST API proxy on this URL.
B
So the REST requests sent to this URL will go to the Kafka cluster, right? So...
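The bridge being instantiated here is Strimzi's `KafkaBridge` resource, which runs an HTTP server in front of the cluster. A minimal sketch (the `apiVersion` and the bootstrap service name depend on the Strimzi release and cluster name, so check your installation):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  # internal bootstrap address of the Kafka cluster created earlier
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  http:
    port: 8080   # REST endpoint; an OpenShift Route exposes it externally
```

Once a Route points at the bridge service, a producer can POST records to `/topics/<topic-name>` on that URL, typically with content type `application/vnd.kafka.json.v2+json`.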
B
Yeah, we have a topic and we are pretty much ready. On the other end, I have set up a data stream on Amazon Kinesis.
C
One second, yeah, there is a question about Debezium. So you could use things like Debezium, which is Kafka specific. The thing that's interesting here is that in this particular demo we are really talking about Kafka, with Strimzi, but if you're just using Knative Eventing, all the Kafka bits are abstracted, right, and it could be something else under the hood.
B
...Knative. So, yeah, I already have set up a Kinesis data stream called "my app stream". Here, this is the ARN, which we'll need while setting up our Kinesis source.
B
So let's just go and start with the demo. Now that we have all the components, I'll just go over how the application will look. A little bit of Knative knowledge is helpful here, but try to follow along; it's not that difficult, right? So what we have is the "my app stream", which is a data source on AWS Kinesis.
B
The idea is to get it from here to the Kafka queue. So, to achieve this flow, we are going to use three components. One is the AWS Kinesis source, which is the one we just installed. And then, like I had said, in the use case you want to modify the message in whatever way possible, so there is going to be an application logic component that we are going to use for that.
B
We are going to use another TriggerMesh component called the Infra target, which is basically a component that allows us to write JavaScript within the declarative syntax of creating a manifest, right? So you can write JavaScript right in there and modify the message however you want, or you could implement your own serverless function and deal with it right from there. We will be using another component called HTTP sink, which is of kind HTTPTarget.
B
It is similar to the HTTP source; the difference is that the HTTP target makes a POST, so you can use it to post messages to an HTTP endpoint. So this component will make a POST to the Kafka REST proxy, and the message will then end up in the Kafka queue, if everything goes fine.
C
So, Sameer, just go ahead and show us the source configuration, because we're going to be running a bit out of time.
C
So go for it, set everything up, and then, you know, Mike, if you have questions, you and I can talk while Sameer gets everything set up, and then we can keep watching what he's doing.
C
Yep, yeah, you could say it like this. So here, you know, we're really getting down and dirty, because Sameer had, you know, created a Kafka cluster, and now we're talking about transforming the events that are flowing through the system, because the Kafka proxy needs events in a certain shape. So what comes out of the Kinesis source needs to be transformed before we can send it to the Kafka proxy and before it gets accepted.
C
So here we're touching on really all the components to build, you know, an event-driven application in OpenShift: using TriggerMesh sources, using OpenShift Serverless, using the Kafka Strimzi operator. It's the whole shebang, right? That's the strength of it. And, you know, going back to what we were talking about earlier, with some of the proof-of-value work that we're doing with financial institutions.
C
What they really like is that, at the end of the day, your event-driven application is fully declared with an API, right? Your event sources use a declarative API, your sinks use a declarative API, the triggers, the transformations. That's really what you get with TriggerMesh. And once you have this representation, it's just like your Kubernetes manifests: stick them in version control, source of truth, and then use your CI/CD system to manage them.
C
Use your GitOps, right? You know, we've just seen 46 million raised to do GitOps. So, you know, adopt your GitOps mindset, and you can manage your event-driven applications the same way.
A
And someone had a question about: do you have some security context for the messaging mesh, events in JSON?
C
I'm not exactly sure what that question means, but, you know, in Knative you can, if you use Istio or another service mesh, turn on the mesh. A lot of people don't use the mesh, they just use the ingress capability, but if you want, you can turn on the service mesh, which gives you mutual TLS between services. So you have added security.
B
Yeah, yeah. So I think I have set up all the components. Basically, the components that are deployed support this architecture: these are the three components, and then there are two channels in between.
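The channels and subscriptions wiring the components together are plain Knative Eventing objects. A sketch of one hop, assuming an in-memory channel and reusing hypothetical component names for illustration:

```yaml
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
metadata:
  name: transformed-events
---
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: to-kafka-sink
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    name: transformed-events
  # deliver each event on the channel to the HTTP sink component
  subscriber:
    ref:
      apiVersion: targets.triggermesh.io/v1alpha1
      kind: HTTPTarget
      name: kafka-rest-sink
```

A second channel and subscription would sit between the Kinesis source and the transformation component in the same way.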
B
So there are channels, and there are the components, right, and then there are subscriptions. If you look at the whole manifest, there is actually really nothing going on. The only thing of interest should be this one. This is the component where we can just write JavaScript code within our manifest and manipulate the event. So whatever JavaScript is supported by infra.js, you could stick it in here and transform the messages. So, for example...
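The inline-JavaScript idea described above could look roughly like this. The CRD group, kind, runtime name, and field names are all hypothetical, chosen only to illustrate embedding a transformation function directly in a manifest:

```yaml
# Hypothetical sketch of an inline JavaScript transformation
# embedded in a manifest; names are illustrative only.
apiVersion: extensions.triggermesh.io/v1alpha1
kind: Function
metadata:
  name: log-transformer
spec:
  runtime: js
  entrypoint: handle
  code: |
    // Receives an event payload, returns the transformed payload.
    function handle(event) {
      var log = JSON.parse(event.data);
      log.source = "kinesis";  // tag where the record came from
      return log;
    }
```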
C
That's a little secret sauce in TriggerMesh, where we have this manifest where you can inject a bit of JavaScript to do a bit of magic. We also have an event transformation which is fully declarative, called Bumblebee. But okay, skip that one, same here; people are gonna get scared.
B
So we have all the components. The idea was to pick messages from the Kinesis stream, and they should end up in the Kafka cluster. So let me just fire up a Kafka consumer.
B
Yeah, so what I have done is: I have set up an EC2 instance which will post application logs, nginx application logs, to my Kinesis data stream. It's just...
B
I found this nice tool called the nginx log generator, which generates random nginx logs, not real ones. What I am doing is that the logs are written out to /tmp/app.log, and the AWS Kinesis agent is set up to send that application log to my Kinesis stream. Simple as that. So let's just start generating the logs.
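The agent setup described here amounts to a small JSON file, typically /etc/aws-kinesis/agent.json. The file path matches the demo; the stream name below is illustrative:

```json
{
  "flows": [
    {
      "filePattern": "/tmp/app.log",
      "kinesisStream": "app-logs"
    }
  ]
}
```

Each flow tails the files matching `filePattern` and ships new lines as records to the named Kinesis stream.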
B
So, if everything works as expected, these logs should get put into the Kinesis data stream, and then from there our source will pick the data from the Kinesis data stream, do all the magic, and send it to Kafka, which we should see in the top half of the terminal.
A
And by the way, Sebastian, I've been negotiating with the production team, and I've negotiated an additional 10 minutes for us, if we want it.
C
I think we're almost there, we're almost there, but yeah. So it looks like a lot, right, but it shouldn't be underestimated. Samir did a bunch of stuff here: he set up a Kafka cluster in OpenShift with Strimzi, deployed the Kafka HTTP proxy, created a topic. He installed OpenShift Serverless, OpenShift Eventing, right, which are Knative components under the hood. He deployed the TriggerMesh operator, right, the AWS sources operator.
C
So now we can actually sync up Kinesis and Kafka, right. So now he's emitting messages to Kinesis, and then, if the entire flow goes well, we should see them in Kafka with this little kubectl command.
A
It's actually good to see that this is actually live, though, as opposed to just a scripted demo. Of course, you know what could go wrong, right?
B
My bad. So if we wanted to verify that these are actually the logs that we are sending, let's try to do that. So this is base64 encoded; that's just base64.
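The verification step described here is just base64 decoding the record payload. A minimal sketch in Python, with a made-up nginx log line standing in for a real Kinesis record:

```python
import base64

# A sample log line like the ones the generator produces (made up).
sample_log = '10.0.0.1 - - [22/Dec/2020] "GET / HTTP/1.1" 200'

# Kinesis record payloads are carried base64-encoded; this is what
# the consumer sees in the record body.
encoded = base64.b64encode(sample_log.encode("utf-8")).decode("ascii")

# Decoding restores the original log line.
decoded = base64.b64decode(encoded).decode("utf-8")
assert decoded == sample_log
print(decoded)
```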
C
You know, if you want, you can write your own source and directly do the transformation and directly inject into an existing Kafka topic, and then everything is packaged as a container, right. So certainly you can do it differently and do it yourself, but now, you know, that's one source.
C
So then you need a GitLab event source, a GitHub event source, and then your targets: not only do you need OpenShift Serverless, but one day you're going to want to send everything to Elasticsearch, and then one day you're going to want to send everything to Splunk, right, and then to whatever else. So that's where we see the strength of TriggerMesh: we have this catalogue of sources and targets, and then you can describe those entire event flows in a declarative manner, and because you have those flows in a declarative manifest...
B
So, like Sebastian said, there was really hardly any real setup of servers and things like that done. If it was a real application, I would have just had to work on my JavaScript and get the job done.
A
Thanks, Samir, and we still have plenty of time there.
B
I'm sorry if I went a little fast. I was not aware that Sam was going to do a demo, so I had actually planned to do the...
A
That's fine, it's all good! Thanks for coming. I know we're two minutes over. Great session, thank you. Yeah, anyways, by the way, if anyone wants to get caught up with us from Red Hat, you can send me an email, just waite@redhat.com, w-a-i-t-e. As far as getting connected with our folks from TriggerMesh, we have our slide up here. Sebastian, I don't know if you want to...
C
My co-founder Mark is in Raleigh, yeah, yeah. So, no, yeah, send us, you know, @triggermesh on Twitter. If you need more information on the product, definitely reach out to Gary, gary@triggermesh.com, and then visit triggermesh.com, the website. We're doing regular webinars, so definitely feel free to reach out and ask us more questions. We are in the OpenShift and IBM Red Hat Marketplace, so you can find us directly there.
A
When's your next webinar? I would have asked you, like: hey, when is the next event you're going to be at? Are you going to be at one in Amsterdam or Copenhagen? But I think we're going to have to wait possibly up to another year for that. But tell us about your webinars that...
C
You do. Like, yeah, mid-January we're doing something with Google, so I'm not sure, but yeah, there's going to be some interesting webinar in January.
C
Then, you know, Twitter. Twitter is a good source of information; we're pretty active there, so you can get all the latest news. Or LinkedIn as well, of course.