From YouTube: Integrations using Apache Camel and Kafka. Zineb Bendhiba, Rachel Jordan, Mara Arias de Reyna (Red Hat)
Description
Integrations using Apache Camel and Apache Kafka in OpenShift
Zineb Bendhiba, Rachel Jordan, Mara Arias de Reyna (Red Hat)
May 10, 2021
https://commons.openshift.org/events.html
A
All right, well, everybody, welcome to yet another OpenShift Commons briefing. As we like to do on Mondays, today we take a deep dive into new tech, and today is no different than most Mondays. We have a trio of folks here, and we're going to talk about integration in OpenShift, specifically integrations using Apache Camel and Apache Kafka.
A
We have Zineb, Rachel and Maria here, and I'm going to let them introduce themselves and tell us where they are in the world. There's going to be a whole lot of demoing going on today, so ask your questions in the chat, wherever you are, whether you're on Twitch, Facebook, YouTube, or here in BlueJeans with us, and we will relay the questions and try to get them all answered for you. But without any further ado, please take it away. We have a full hour.
B
Hi! So, yes, we are three software engineers working on Apache Camel in some way or another. All three of us work for Red Hat engineering, and we want to tell you about how to do good integrations in OpenShift, or any other Kubernetes-like cluster, natively.
B
So let's start with what integration frameworks are, or how to do integrations the proper way. Usually, when we talk about integrations, what we are talking about is: you are building a software architecture and you have different components. Maybe databases, maybe APIs, maybe, I don't know, you want to connect to some FTP server or to some custom service; and you have to define in your architecture what the workflow of data is going to be from one component to the next.
B
Maybe you have to connect to some component, go to another component, and then go back; maybe it's a flow that is more linear. But in the end you have to define, or consider, for your software architecture, not only the specific logic of how the data or the flow is going to go from one component to the other, but also how to connect to these components. If you take, for example, Salesforce: if you want to connect to Salesforce, you have to learn how the authentication works and how the API works.
B
What are the formats? What are the protocols? And then maybe you have to go to a database, and you have to learn how the authentication goes there, and what the pool of connections to the database looks like. All of this is usually a task that is repeated over and over again by many, many software engineers, by many, many developers, and we write and rewrite the same lines of code for how to connect to one component or another.
B
And yes, it's true that usually each type of component has its own client library you can use. But even then you have to learn how to use that client library, you have to consider monitoring, and you may have to upgrade that client library, maybe because the API of the component changed, or maybe because there is some security issue that forces you to upgrade the library. And then your architecture starts getting bigger and bigger, and harder to maintain, with one component very coupled to the next.
B
So this is what integration frameworks are for. They are the things in between components, so you don't have to worry about all of these things. You can forget about how each component works, how to interact with each component, or any issues you have to consider; for example, when you connect to some database, the considerations around encoding, authentication, and security issues. And integration frameworks, or at least good integration frameworks, should not only help you connect to the components, but also help you define the workflow.
B
So, for example, you can define that first you go to component A, then you go to component B, and now you have a conditional: maybe you go to component C, maybe you go to component D, it depends. And this is what the Enterprise Integration Patterns are: different ways of defining workflows. Maybe it's a loop, maybe it's a conditional, maybe it's a broadcast.
B
There are many different patterns for communication, or for creating workflows, and this is what a good integration framework should give you: a way to connect to components transparently, in a decoupled way (easily, of course), and also a good way of defining the logic of the workflow.
B
We want to talk to you about Apache Camel, and this is roughly how Apache Camel works on the inside. You have different endpoints that are specialized in connecting to external systems.
B
It may be an endpoint that connects to a database, Twitter, Facebook, LinkedIn, whatever. This endpoint, when interacting with an external system, generates a datagram which we call the Exchange, which is the message. This message can have all kinds of data inside, and it can have headers and attributes: not only the response from the external system, but also some attributes to give context. And this message goes to the router, which decides what the following step is going to be and sends the message on.
B
Why do we like Apache Camel? Well, it's open source, which is always a good guarantee of support, but it's also very, very lightweight. It has more than 350 different types of connectors, which means it's difficult to find a use case that's not already covered; and if you find a use case that's not already covered, well, this is open source: you can create your own connector. And the idea is that Camel offers you a domain-specific language to define the workflows, which is very, very simple.
B
Usually each step is just one line of code, and you can forget about most of the implementation details of how to connect to that component. You can even easily replace one component with another; for example, at some point, instead of using an FTP server, you want to use an S3 storage system, and it's very easy to change, because Camel is designed to connect different steps easily. This is why Apache Camel is often called the building blocks of software: you can easily define how to connect one component with the next.
B
This allows you to focus on your use-case logic, to focus exactly on what it is you want to do, not on how to connect to some external system and how that external system works.
B
These are hello-world examples of Camel. Camel has its DSL, but it allows you to use different languages. For example, here in green we see how it would look in JavaScript, which is just a timer that, every second, prints a "Hello World" to a log. In blue we see the Java version, which is exactly the same as the JavaScript, because it's the same DSL, but it has all the decoration (a public class extending RouteBuilder) that it needs to be interpreted as Java. And then, in orange,
B
we have a YAML version of the same thing. You can see you also have a "from" with a timer, then a step that sets a body of "Hello World", and then logs it at info level. So it's very, very easy to define any type of workflow in Camel, and you can choose the language of your preference, the one you are most comfortable working with. So, when should you use Apache Camel?
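The variants described here all express the same route; as a minimal sketch in the Java DSL (assuming camel-core on the classpath; the endpoint options are illustrative):

```java
import org.apache.camel.builder.RouteBuilder;

// Hello-world route: a timer fires every second, the body is set,
// and the message is logged at INFO level.
public class HelloWorldRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("timer:hello?period=1000")
            .setBody().constant("Hello World")
            .to("log:info");
    }
}
```

Swapping `timer:` for another of the 350+ components changes the source of the messages without touching the rest of the route.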
B
I would advise you to use it always, because unless you have a very, very contained application that doesn't interact with anything else, it really helps you keep the coupling very low, and it helps you keep your system very easy to maintain.
B
But it's especially useful when you have very complex architectures with a lot of different components, so you can forget about caring how to interact with all those components.
C
Oh cool, so here comes my part: I'm going to introduce you to the Camel Quarkus project. It's an Apache Camel subproject that brings all the awesome integration capabilities of Apache Camel, and all the components that are available in the Apache Camel project, to the Quarkus platform.
C
So, to explain how awesome this project is, let's have a quick overview of what Quarkus is and see how Camel fits into this platform. Quarkus is a Kubernetes-native Java stack that is tailored for GraalVM and OpenJDK HotSpot, and its main goal is to make Java run better in those modern cloud-native microservices and serverless architectures.
C
We say that Quarkus is "Supersonic Subatomic Java", because it addresses the two main problems that the Java language has in those container-based architectures, which are the memory footprint and the startup time.
C
So it's supersonic because it's way faster at startup than traditional Java projects. We can see here in the slide two examples comparing a traditional Java application, Quarkus on JVM, and Quarkus on native, for the time to first response, which includes the boot of the app plus the first response time of the REST endpoint. So here we have an example of a plain REST endpoint, and here a REST endpoint that does something in a database.
C
We can see that when we run Quarkus on the JVM, we already have a big difference in the startup time, but when we are in native mode via GraalVM, the difference is really impressive. So in environments like Kubernetes and OpenShift we can gain a lot in those styles of architecture, where we have to deploy our apps very, very frequently, and where we also need to scale our apps up and down very quickly. And it's subatomic because of the lower memory footprint.
C
Another benefit is the developer joy. In the Quarkus ecosystem they put a lot of focus on the developer experience and on making things very easy, and there's also this awesome live reload.
C
It is game-changing from a Java developer's point of view: you run your code in dev mode, save the code you just edited, and it is automatically refreshed without you doing anything. Again, in a Java ecosystem, that's really something amazing. And it has a very large set of standards and lots of libraries; all the well-known libraries, you can find them there, and you can really find your joy building a Quarkus app.
C
You can do just about everything you want, and of course there is our project, Camel Quarkus, which is already available there in the platform. You can have all the integration capabilities of Apache Camel available in the Quarkus platform, and you can do your integration with Quarkus; your integration will be well suited for a Kubernetes environment, and it will take advantage of all the performance that comes from Quarkus.
C
So you have your Camel connectors with a faster startup, faster scale-up and scale-down, and lower memory usage.
C
And there are already lots of extensions. If we go to the Apache Camel website, in the Camel Quarkus section, there is a page about all the extensions that we have, and we can see that we already have more than 300 extensions, with all the information about every extension.
C
So there are 300-something extensions already available to use in the Quarkus platform, and of course we benefit from the same developer experience for Quarkus development.
C
So now it's demo time. With Rachel and Maria, we built a demo that we're going to assemble in different steps during this presentation, and what we're going to do is have two different connectors.
C
One of them is going to pull data from Telegram and push it to a Kafka topic, and another one from Twitter to a Kafka topic; and then we will have another one that will take the aggregated data from Kafka to Elasticsearch for future data-science usage. The idea of the whole demo is to show you different types of Camel connectors that we can build for OpenShift.
C
So for the first part, I'm going to show you the Twitter-to-Kafka connector, and I'm going to use Camel Quarkus for this. This is going to be the part where I write some Java code, but if you are not a Java developer, don't worry: it's super accessible, and stay tuned for the rest of the presentation.
C
So now let's do some coding. I have my application here, and what I wanted to show is that I created it from code.quarkus.io and just selected the Camel extensions that I want, which are the Twitter one and the Kafka one; and since I need some logs, I also took the Camel Log one. Then I just downloaded the zip, and it created an app for me. In the Maven POM file I already have all the dependencies that I need.
C
It also has the build info, like the native profile to build my native app, and it has the Dockerfiles, so I don't have to take care of all of that: I can just pick up the app. I have a first REST endpoint just for testing, and I can actually just run the code and I will have my first REST endpoint. But I don't need this REST endpoint.
C
I'm just going to delete this class, because I don't need it, and what I want to do is build my own route. I just have to mention that I've already added some properties here: the keys that the Twitter component needs so that it can access my developer account at Twitter, so that I can go and run some searches on the tweets.
C
So what I'm going to do is start the Camel route that is going to run a search on Twitter, and I'm going to do it in dev mode, so we're going to see the code refresh automatically.
C
If it gets something new, I will know it; generally the consumer takes the last five tweets that mention Apache Camel, and then, if there are new tweets, we will see the new tweets coming in. So that I have a bigger terminal, I'm going to switch to my terminal and just run the Maven command: mvn compile quarkus:dev.
C
It picked up the environment variables that I set. What I didn't say about the properties here is that those values are already available as environment variables, so I just put in the names, and they are picked up from my environment. So the Twitter search route has already started, and here it logs the last tweets that mention Apache Camel.
C
What I want to do for our demo (I don't know if I have it here) is to know, on the Kafka topic, whether the message comes from Twitter or Telegram, so I'm just going to change the body of the message. I'm going to leave my app running here, go back to my route, and add a step that sets the body and transforms it. Here I'm just going to copy-paste, so that I don't make any mistakes.
C
So now, if I go back here, IntelliJ will auto-save my code, and here we saw that the route restarted without me doing anything, and now my messages are in that JSON format.
C
I am already connected to my project here, so I can do a Maven clean package and tell it that I want to deploy on Kubernetes, and I'm going to tell you how this is magically done. What I've done is add two dependencies: one is the Quarkus OpenShift one, which will
C
let me deploy my app to OpenShift, and I also added the container-image-docker one, because I personally want my app in a container before I push it. And here I have some configuration about how I'm going to build my image, and some Kubernetes settings, like the name that I want to give my app. Oh, it has actually already built my image and sent it to my OpenShift.
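The two commands used in this part of the demo are the standard Quarkus ones; as a sketch (the deploy property is the one provided by the Quarkus Kubernetes/OpenShift extension, project specifics omitted):

```shell
# Run locally in dev mode: live reload picks up saved changes
mvn compile quarkus:dev

# Package the app, build the container image (container-image-docker),
# and deploy the generated resources to the cluster we are logged into
mvn clean package -Dquarkus.kubernetes.deploy=true
```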
C
So let's go and see. Here I am in my OpenShift view, in the Administrator perspective.
C
I wanted to show you that we have already installed the Strimzi operator for Kafka, and if we go here, we have a cluster; and if we look for the topics, here we have the topic that we're going to use. So I'm going to go to my Developer view, and my app is failing here. I have deployed a container, an application that is just consuming from this topic, so that I can watch the logs there, and its code is here.
C
Actually, I just wrote a consumer that consumes from the Kafka topic and logs the messages. This is just for me to see that everything is arriving from my other app. And if I go back to the topology here, my app doesn't want to start.
C
So what I did is create a Secret with all the variables that I had on my computer: the Kafka bootstrap broker URL and the four keys and secrets that I need for my Twitter account.
C
There is something that I wanted to show here: the app with the Twitter Camel route started in 435 milliseconds.
C
I didn't expect that the consumer would already have some catching up to do, but that's why. I don't know if it got some new tweets; maybe I can tweet something, yeah.
C
So now I'm just going to show the native build. I'm not going to push it, because it takes some time, but it's actually the same command; what we do is activate the native profile that we already have in our app. It's going to take more time to build because, as we saw in the slides, when we are on native
C
we have a lower memory footprint and faster startup, so there is a whole analysis done in this phase, where the code is analyzed and only what the app needs is kept. But I have already created a Docker image for it, so I'm just going to stop this one, maybe.
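The native build is the same command with the native Maven profile activated; as a sketch (profile name as generated by code.quarkus.io):

```shell
# Same deploy command, but built ahead-of-time with GraalVM native-image;
# slower to build, much faster to start, lower memory footprint.
mvn clean package -Pnative -Dquarkus.kubernetes.deploy=true
```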
C
And the deployment was very quick. As you can see here, it's the same app, the same Camel routes, but instead of starting in 435 milliseconds, it started in only 7 milliseconds, and it's already getting some tweets. So this is it. If someone wants to tweet something, we'll see it live. I can stop sharing, and it's over to Rachel now.
D
Well, when you look at the big picture, mainly the development process, it's a lot. It's a lot to learn and a lot to do, because, if you think about it, the majority of the time is spent handling dependencies and doing things like preparing for deployment to OpenShift or Kubernetes: you have to configure Docker or S2I.
D
You have to create a container and build the image; all of that can get pretty daunting. So we wanted to create something specifically made for serverless that is also smart enough to do those kinds of repetitive and time-consuming tasks for us. At the same time we wanted it to work natively on Kubernetes, and, even more importantly, we wanted to lower the barrier to entry, to eliminate a lot of the associated complexity, and to make it easier for people to learn it and pick it up.
D
For these architectural trends and changes, and just like with the Quarkus project, a subproject of Apache Camel was created so that you get the same benefits from it, except this one is native to Kubernetes and built specifically for serverless. As a result, it's called Camel K.
D
So what exactly is Camel K, and how does it work? Camel K runs on top of Quarkus. First of all, it enables developers to write very small, fast Java applications, like you just saw. One of the biggest benefits, I think, is that Camel K handles Camel dependencies for you, which is a huge win; and, of course, it also removes the need to configure Docker or S2I before deploying to OpenShift or Kubernetes.
D
That means you can then continue to focus on writing integrations, just using the already really simple Camel DSL, or domain-specific language, with no need to worry at all about, you know, how am I going to package it, deploy it, redeploy it, and that kind of thing. So it's straightforward to make a Kubernetes-native integration application using something like Camel K.
D
So, operators: as probably everybody here knows, operators are commonly used to install and configure applications or platforms, whether on Kubernetes or OpenShift. They're kind of the digital version of the traditional human operator who used to do all of this manually: they would have to install dependencies and everything for applications, whether in a legacy environment or elsewhere, just making sure everything is in place for the application to be able to run and do its job.
D
So this list here is just to give you a general idea of all the things the Camel K operator does and how much time it will really save you. The main responsibility of the Camel K operator is to look for Camel K integrations deployed with the kamel CLI, and to build and deploy them as Kubernetes applications.
D
Just as straightforward as that. And all of that is really possible because of the Operator SDK. It basically performs the operations on the Kubernetes resources that are needed to run the Camel DSL script, and part of that is that it defines several new Kubernetes APIs: it extends the cluster with custom resources. In other words, the operator scans your application and creates the resources that you need in the cluster automatically.
D
The three main concepts of Camel K: well, we already mostly discussed the Camel K operator; it's basically the intelligence that coordinates all of the moving parts, where each custom resource has its own dedicated state machine that orchestrates its phases. There's also the runtime, which provides the functionality to actually run the integration. And then there are traits.
D
I won't get too far into traits; it's a more advanced concept, but the general idea is that you can customize the behavior of the operator and the runtime. Typically, for most people, the defaults are sufficient, but just so you know, it's possible for an experienced user to modify them.
D
So, to get started with Camel K: first, you need to be logged into a cluster you have access to. You have to install the kamel binary, put it on your system path, and run "kamel install" to install it; that will configure the cluster for you with the custom resource definitions and install the operator in the namespace. And that's pretty much it.
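Getting started is a two-command affair, as described; a sketch (the integration file name is illustrative):

```shell
# Configure the cluster (custom resource definitions) and install
# the Camel K operator into the current namespace
kamel install

# Run an integration file; --dev streams the logs and reloads on save
kamel run integration.js --dev
```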
D
So, writing your first Camel K integration is incredibly simple. The first thing you do is just create your integration file. Camel K currently supports a bunch of languages; just off the top of my head: Java, Kotlin, Groovy, XML, even JavaScript. And that's quite important to somebody like me, because I have to use JavaScript more often than not, and I wanted something that was going to be easy to work with, so this was a very low barrier to entry for myself as well.
D
But what's really cool about this, that I should probably mention, is that it's able to materialize and start the integrations within just a few seconds. That helps a lot during the development phase, because you get immediate feedback on your code and you can make changes right away.
D
Camel K provides a lot of features when it's run on Knative, and if you're not familiar with Knative, I'm not going to go too deep into it, but it basically gives you serverless capabilities on Kubernetes. There are three major areas in Knative; there's the Build area, which provides you with custom resources, and the Knative Serving area.
D
Serving gives you an easy way to trigger your functions and to orchestrate services, but I think the thing that really makes Camel K shine here is that your service simply receives the messages through incoming CloudEvents, which means that you don't have to actively connect to the broker. So the service ends up being quite passive, and the Knative trait automatically discovers the addresses of Knative resources and injects them into the running integration.
D
And if you have an existing Camel K integration already, then it's possible to run it as a Knative serverless service.
D
So, with serverless becoming a popular architectural style, you'll see many examples, but it's important, I think, to remember that you don't need to use Camel K only for serverless. Using it alone, or even just to deploy a Quarkus app, is a very common and useful thing to do. And don't get overwhelmed with all the technologies: just because they work really well together doesn't mean they're dependent on each other.
D
Camel K also gives you the possibility to set up monitoring, and that can be done for both the integration and the operator. For the integration, I believe, if you have OpenShift already, the Prometheus operator is already deployed as part of OpenShift Monitoring; and to monitor the operator, you would just set that up at the moment that you're installing Camel K. Then, of course, you can set up alerting, and you
D
can visualize the collected data using something like Grafana or some other API consumer. Also quite important is that Camel K helps with transformation. Adding a transformation is as simple as adding a line or two to your Camel DSL, to your integration.
D
Something like converting the outgoing body to upper case would be an example: you would just add it as a step, and you can have as many steps as you like.
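As a sketch of such a step (JavaScript DSL, one of the languages Camel K supports; the endpoint URIs are illustrative):

```javascript
// A route with one added transformation step: the Simple expression
// upper-cases the outgoing body before it is logged.
from('timer:camel?period=1000')
    .setBody().constant('a camel sighting')
    .transform().simple('${body.toUpperCase()}')  // the transformation step
    .to('log:info');
```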
D
So I'll be doing just a teensy tiny demo, following the theme of reporting camel sightings, which will immediately end up in a Kafka topic. This time I'll be reporting my sightings through Telegram, so I've already created the Telegram bot, but it's very easy to do: you can create one in under a minute or so, and I'll leave the link in the slides. Okay.
D
So I've already got the integration loaded in here; it's written in JavaScript, to change it up a bit, but don't get too jealous.
D
They look almost exactly the same in Java. So here, this is just showing where we're starting from: the input is coming from a Telegram bot, we would add our authorization token here, set the body, and here we're marshalling it, converting it to JSON; though with Kafka, which is where we're sending it, that's not really necessary, because it does it automatically. But yeah, then over here,
D
this is where we're piping it to the Kafka topic, setting the body, and then we send back to Telegram: thank you for reporting your camel sighting.
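A sketch of an integration with that shape (JavaScript DSL; the property placeholders, topic name, and reply text are illustrative, not the demo's actual values):

```javascript
// Telegram -> Kafka: forward reported sightings, then acknowledge the sender.
from('telegram:bots?authorizationToken={{telegram.token}}')
    .marshal().json()   // serialize the incoming message to JSON
    .to('kafka:camel-sightings?brokers={{kafka.brokers}}')
    .setBody().constant('Thank you for reporting your camel sighting!')
    .to('telegram:bots?authorizationToken={{telegram.token}}');
```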
D
It also synchronizes source changes and reloads the Camel context automatically, which I'll show you. It's doing a lot at this point, building an integration kit and so on. So from here we go to the OpenShift console; we're in the Administrator view, so let's go to the Developer topology view, and we can see here that the integration is running.
D
So if I were to go back over here...
D
Of course it's not. I cannot update it, all right.
D
That's okay, that's okay, but yeah, you're going to have to take my word for it: it will update here and say "have a nice day". I will not stop this integration now, just to show you that. But yeah, another thing is that you can also get the status at any point; this will show you the running integrations. And of course I'm not going to do it right now, but you can also run kamel delete, and it will delete whichever integration you specify.
D
And with that, I will just leave you here, pointing out that I've left a quick summary and some resources here, and I'll leave it to my colleague now to continue.
B
So, I hope you're seeing my screen. We have seen that if you want to build complex software architectures, you should use Apache Camel, because it makes it very easy to interconnect things. You have seen that, for example, creating a Telegram bot is almost stupidly easy: in four lines of code you can create a Telegram bot, and that's it, you don't need anything else. Well, you need to add the code for your logic.
B
Obviously, if you add commands to the Telegram bot, you will have to write the code that reacts to those commands, but the part about how to integrate with the Telegram bot, what the Telegram bot API looks like... I don't know, and we don't really need to know. We just rely on Camel. If Telegram updates their API, or how they interact with the bots,
B
you don't care: you just upgrade Camel, and Camel takes care of it. It will work. So it really, really makes it very easy to develop things where different third-party components interact. Then we saw that, well, Camel runs on Java, and Java sometimes is not the fastest, not the best for serverless.
B
The idea is that, well, for creating a Camel workflow or a Camel integration, you usually use a lot of different pieces, and if you are focusing only on the logic of your use case, it may not be as nice as it could be. For example, if you are a scientist analyzing camel sightings from all over the world through a Telegram bot and a Twitter API, you want to be able to integrate that with your machine learning platform, or whatever, easily, and you don't want to worry about how to interact with it.
B
That is the API that your scientist is using to add new data or run analyses or whatever. But your scientist is not a developer; they are a scientist, so they don't really know how to call the API.
B
Well, don't worry: you can create a Camel snippet that, in a transparent way, greatly simplifies all these calls to the API, or whatever the workflow is. Maybe it's not one step but several, but you can simplify it, so your scientists can build their own workflows in a very easy way with not so many building blocks. So it's like a meta block.
B
This is a very native concept, I think. Camel usually has this consumer/producer distinction, depending on whether the endpoint is reading data from the outside or writing data, and Kamelets have a similar idea: you have source Kamelets, which take data from the outside, and sink Kamelets, which write data somewhere. So there are two different types of steps, and you usually create a source Kamelet
B
snippet that reads the data and a sink Kamelet snippet that writes the data, and you put first a source, then a sink, and join them. You usually have only two steps in your workflow: you are pairing a source with a sink.
B
The idea is to simplify workflows even more. At the beginning I told you this is open source: if there is some connector that's not available, maybe you want to create your own connector, but that means you have to implement it in Java. Maybe you are not a Java developer; maybe you are a YAML DevOps person, maybe you're a JavaScript developer, and you don't want to work with Java. But if there is a way of defining how this connector would work (for example, Telegram uses HTTP APIs behind the scenes), then you could create one.
B
The Elasticsearch Kamelet is not released yet; it's only in the snapshot version, not in the release version, but it's going to be in the next one, because it's already committed, so it's there. And this code you see on the left is everything you need to write in your file to connect from Kafka to Elasticsearch.
B
The name is going to be kafka-to-es-binding. Then you define the source, which is going to be a Kafka source, where you define the URL and the topic; no username or password in this case. And then the sink, which is where I'm going to write the data, which is going to be Elasticsearch. Here I have to define the URL, the cluster name, and the index name, which are how Elasticsearch identifies the cluster and index you want to write to, and then the username and password.
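The file walked through above might look roughly like the following sketch; the Kamelet names and property keys match the community catalog as I understand it but may differ between versions, and all values are placeholders:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: kafka-to-es-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: kafka-source
    properties:
      bootstrapServers: my-cluster-kafka-bootstrap:9092   # placeholder URL
      topic: tweets                                       # placeholder topic
      # no username or password in this case
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: elasticsearch-index-sink
    properties:
      hostAddresses: elasticsearch:9200   # placeholder URL
      clusterName: my-cluster             # placeholder cluster name
      indexName: tweets                   # placeholder index name
      user: elastic
      password: "<redacted>"
```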
B
Of course I'm not going to show you my password, but what I'm going to do is just use this file, with the proper password, and add it to the cluster. This is the current topology.
B
So if I, for example, retweet something with Apache Camel, I should see that here when the Twitter source queries it, though I'm not sure it's going to be here yet. It is, from Twitter: I have this bot. So let's check that this is really in Elasticsearch. Well, I have here things from last week, when we were testing, but if, for example, I search for the latest Telegram entry, I see a camel on my desk.
B
If I search for the latest Twitter entry: the one I just retweeted, with the pineapple. And I don't know if we have already been talking for an hour, so maybe it's time to open it up for questions, or to chat a bit about Camel. We could talk for hours and hours about this topic, but I think it's better if we just review a bit, like we did: the state of the art of Camel, starting with trusty classic traditional Camel, moving to Camel Quarkus, then Camel K and Kamelets, and then let's see what people want to hear about.
B
Well, what I see right now is that we are pushing a lot on the serverless side, much more than in previous years, and that Camel Quarkus is very active. I think Zineb can talk more about our activities there.
B
Camel Quarkus aside, I'm right now working on the Kamelet side, which, as I said, is a concept that was born at the end of last year, and there are some very good articles from Nicola explaining where this comes from and where it's going. We only have around 20 Kamelets, compared with the 350-something connectors in Camel.
B
So this is a very small subset of Kamelets, but I think it's improving a lot, and it's going to be very visual with this topology view in OpenShift, so you can very easily connect different snippets of code. And, in my experience, what I feel is that some people still think that Camel is not that easy to start with.
B
Even if I find it very easy now (but okay, maybe it's not that easy to start with), the Kamelet effort is going to push a lot on making it even easier, because you can now separate the development team that creates the Kamelets from the people who are going to use those Kamelets: maybe scientists, maybe analysts, whoever needs to build integrations.
A
That sounds like it's going to be a really interesting community to work in, too, creating the new Kamelets. Where should people go to find a space to collaborate on creating new Kamelets? That might be a good thing to tweet or put into the chat. Yes.
B
Of course, everything is open source, so we have...
B
The Apache camel-kamelets repository. Let's put it in the chat so we can tweet it later. This is the community repository of all the Kamelets that are out there. I see that many have been added in the latest days, even hours ago, so this is picking up a lot of speed, gaining more Kamelets every day. And of course the Apache Camel website is the place you should go first for any kind of documentation we have.
B
We have different top-level sections for Quarkus, Kamelets, and Camel K, and even the Camel Kafka Connector, which is something that also merges Kafka and Camel and has its uses, but we preferred not to introduce it in this talk as well, because it would just be too much.
A
Yeah, well, I think that means we're going to have to have you guys back to do another one, because this has been amazing. The demos, and shining the light on these future integrations and what's available now, is pretty amazing, and you shouldn't hesitate to get involved in the Camel and Quarkus universes. I think its time has arrived.
A
Zineb, did you want to add any final words around where you see things going these days? What's next for your adventures?
C
For me, I'm on the Camel Quarkus side. For now there is this Quarkus 2.x that we were working on, and of course the whole team did an amazing job to have so many extensions there. There's still some work, because some of them have not been ported to native yet, and there's always so much work. If you want to get involved in the community, just come and talk to us on the mailing list or on the Zulip chat, and yeah, there's lots of work to do.
D
Thanks. Well, 1.4 was just released recently. A lot of the focus has been around Kamelets, actually, so a lot of what María has said is kind of where Camel K is moving towards. We just added the bind subcommand as well, which helps you use Kamelets directly whenever you need them. And also we're exploring the user interface side of things.
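As a sketch, the bind subcommand creates such a binding from the command line instead of writing the YAML by hand. The Kamelet names and property keys below are placeholders, and the exact flags may differ between Camel K versions; this also assumes a cluster with the Camel K operator installed:

```
# Illustrative only: bind a source Kamelet to a sink Kamelet from the CLI.
# Kamelet names and property keys are placeholders.
kamel bind kafka-source elasticsearch-index-sink \
  -p source.topic=tweets \
  -p source.bootstrapServers=my-cluster-kafka-bootstrap:9092 \
  -p sink.hostAddresses=elasticsearch:9200 \
  -p sink.indexName=tweets
```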
A
Well, as the Kamelet world spins and develops, we'll definitely have you guys back, and maybe even a walkthrough of creating and contributing a Kamelet would be a great future topic to have you back on. I'm thrilled with the depth of the content and the demos, so this is really good. It's one of the best overviews I've seen explaining the whole Camel universe, so thank you very much for this. I'm not seeing...
B
A
Questions in the chat. I'm just going to pause and see if Chris has found any in all of the other places where we're streaming. None yet, so you have either answered all the questions or left them with just enough mystery that people will go off and explore for themselves. So thank you very much for taking the time. I know you're in London and Spain and France, and time zones are always a fun thing, but we totally appreciate you coming, and we'll definitely have you back. Yeah, this is a big...
A
It was a big demo, but it's a great thing to try and break Camel down into these pieces and parts, very digestible. So thanks, and we'll talk to you all again soon. Thanks, everybody, for coming. Thanks for having...