From YouTube: OCB: Integration in OpenShift Zineb Bendhiba, Rachel Jordan-McGeever, María Arias de Reyna (Red ...
Description
In this briefing, Zineb Bendhiba, Rachel Jordan-McGeever, and María Arias de Reyna will discuss how to create integrations and automations using Apache Camel and Apache Kafka in OpenShift.
We will dive into Camel Quarkus, Kamelets, and the Kafka interactions with Camel, shedding some light on the future of integration frameworks.
A
All right. Well, everybody, welcome to yet another OpenShift Commons briefing. Today, as we like to do on Mondays, we're taking a deep dive into new tech, and today is no different from most Mondays. We have a trio of folks here, and we're going to talk about integration in OpenShift, specifically integrations using Apache Camel and Apache Kafka.

A
We have Zineb, Rachel, and María here, and I'm going to let them introduce themselves and tell us where they are in the world. Then there's going to be a whole lot of demoing going on today, so ask your questions in the chat, wherever you are, whether you're on Twitch, Facebook, YouTube, or here in BlueJeans with us, and we will relay the questions and try to get them all answered for you. But without any further ado, please take it away; we have a full hour.
B
Thank you. Hi. So, yes, we are three software engineers who work on Apache Camel in one way or another. All three of us work for Red Hat engineering, and we want to tell you how to do good integrations in OpenShift, or in any other Kubernetes-like cluster, natively.
B
So let's start with what integration frameworks are, or how to do integrations the proper way. Usually, when we talk about integrations, what we mean is that you are building a software architecture with different components. Maybe databases, maybe APIs, maybe you want to connect to some FTP server or some custom service, and you have to define in your architecture what the workflow of data is going to be from one component to the next.

B
Maybe you have to connect to some component, go to another component, and then go back. Maybe it's a more linear flow. But in the end, you have to consider in your software architecture not only the specific logic of how the data is going to flow from one component to the other, but also how to connect to these components. If, for example, you want to connect to Salesforce, you have to learn how the authentication works and how the API works.
B
What are the formats? What are the protocols? Then maybe you have to go to a database, and you have to learn how the authentication goes there, and how the pool of connections works for connecting to a database. All of this is a task that is repeated over and over again by many, many software engineers, by many, many developers, and we write and rewrite the same lines of code for how to connect to one component and how to connect to another.

B
And yes, it's true that each type of component usually has its own client library you can use. But even then, you have to learn how to use that client library, and you have to keep monitoring whether you need to upgrade it, maybe because the API of the component changed, or maybe because there is some security issue that forces you to upgrade the library. And then your architecture starts getting bigger and bigger, harder to maintain, and very coupled from one component to the next.
B
So this is what integration frameworks are for. They are the things in between components, so you don't have to worry about all of that. You can forget about how each component works, how to interact with each component, or whether there is any issue you have to consider. For example, when you connect to some database, there are considerations about the encoding, the authentication, and security issues. And integration frameworks, or at least good integration frameworks, should not only help you connect to the components, but also help you define the workflow.

B
So, for example, you can define that first you go to component A, then you go to component B, and then you have a conditional: maybe you go to component C, maybe you go to component D, it depends. This is what the enterprise integration patterns are: different ways of defining workflows. Maybe it's a loop, maybe it's a conditional, maybe it's a broadcast.

B
There are many different patterns for communication, or for creating workflows, and this is what a good integration framework should give you: a way to connect to components transparently, in a decoupled way, easily of course, and also a way of defining the logic of the workflow properly.
B
In Apache Camel, you have different endpoints that are specialized in connecting to external systems. It may be an endpoint that connects to a database, Twitter, Facebook, LinkedIn, whatever. This endpoint, when interacting with the external system, generates a datagram, which we call the exchange; this is the message. This message can have all kinds of data inside, and it can have headers and attributes: not only the response from the external system, but also some attributes to give context. This message goes to the router, which decides what the following step is going to be and sends the message there.
B
Why do we like Apache Camel? Well, it's open source, which is always a good quality in good software, but it's also very, very lightweight. It has more than 350 different types of connectors, which means it's difficult to find a use case that's not already covered. And if you find a use case that's not already covered, this is open source: you can create your own connector. The idea is that Camel offers you a domain-specific language to define the workflows, which is very, very simple for each step.

B
Usually it's just one line of code, and you can forget about most of the implementation details of how to connect to that component. You can even easily replace one component with another. For example, at some point, instead of using an FTP server, you want to use an S3 storage system, and it's very easy to change, because Camel is designed to make connecting different steps easy. This is why Apache Camel usually gets called "the building blocks of software": because you can easily define how to connect one component with the next.
B
This allows you to focus on your use-case logic, and focus exactly on what you want to do, not on how to connect to some external system and how that external system works.
B
These are "hello world" examples of Camel. Camel has its DSL, but it allows you to write it in different languages. For example, here in green we see how it would be in JavaScript: just a timer that, every second, prints a "Hello World" to a log. In blue, we see the Java version, which is exactly the same as the JavaScript, because it's the same DSL, but it has all the decoration of a public class extending RouteBuilder that it needs to be interpreted as Java. And then, in orange,

B
we have a YAML version of the same thing. You can see you also have a `from` with a timer, then a step that is setting a body, which is "Hello World", and then a log at info level. So it's very, very easy to define any type of workflow in Camel, and you can choose the language of your preference, the one you are most comfortable working with. So, when to use Apache Camel?
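As a rough sketch of the YAML flavor just described (the file name is hypothetical, and the exact step spelling varies a little between Camel versions), the timer hello world might look like:

```yaml
# hello.yaml — a timer route that logs "Hello World" every second
- from:
    uri: "timer:hello?period=1000"
    steps:
      - set-body:
          constant: "Hello World"
      - log: "${body}"
```

The JavaScript and Java versions express the same route with the same from / set-body / log building blocks, only wrapped in each language's own syntax.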
B
Camel is especially useful when you have very complex architectures with a lot of different components, so you can forget about caring how to interact with all those components. But especially if you have a very dynamic architecture, where you have to add and remove steps of your workflow, or you want to be able to add and remove steps easily, like replacing an FTP server with S3, or replacing a database with Elasticsearch, or whatever you can think of, then seriously, using Apache Camel will help you a lot in making this very easy.

B
And now Zineb is going to talk to you about Camel Quarkus.
C
Oh, cool. So here comes my part: I'm going to introduce you to the Camel Quarkus project. It's an Apache Camel subproject that brings all the awesome integration capabilities of Apache Camel, and all the components that are available in the Apache Camel project, to the Quarkus platform.

C
So, to explain how awesome this project is, let's have a quick overview of what Quarkus is, and see how Camel fits into this platform.

C
Quarkus is a Kubernetes-native Java stack that is tailored for GraalVM and OpenJDK HotSpot, and its main goal is to make Java run better in modern cloud-native microservices and serverless architectures.
C
It's "supersonic" because it's way faster at startup than traditional Java projects. We can see here in the slide two examples comparing a traditional Java application, Quarkus on the JVM, and Quarkus native, for the time to first response, which actually includes the boot of the app plus the first response time of the REST endpoint. So here we have an example of a plain REST endpoint, and here a REST endpoint that does something in a database.

C
We can see that when we run Quarkus on the JVM, we already have a big difference in the startup time. But when we are in native mode via GraalVM, the difference is really impressive. So in environments like Kubernetes and OpenShift, we can really gain a lot in those styles of architecture, where we have to deploy our apps very, very frequently and we need to scale our apps up and down very quickly. And it's "subatomic" because of the lower memory footprint.
C
Another benefit is the developer joy. In the Quarkus ecosystem, they put a lot of focus on the developer experience and on making things very easy. There's also this awesome live reload, which is game-changing from a Java developer's point of view: you can just run your code in dev mode, save the code that you just edited, and it automatically refreshes without you doing anything. In the Java ecosystem,

C
that's really something amazing. And it has a very large set of standards and lots of libraries; all the well-known libraries, you can find them there, and you can really find your joy building a Quarkus app. You can do pretty much everything you want. And, of course, there is our project, Camel Quarkus, which is available there.
C
Already in the platform, you have all the integration capabilities of Apache Camel available in Quarkus, and you can build your integration with Quarkus. Your app with this integration will be well suited for a Kubernetes environment, and it will take advantage of all the performance that comes from Quarkus. So you will have your Camel connectors with a faster startup, faster scale-up and scale-down, and lower memory usage.

C
And there are already lots of extensions. If we go to the Apache Camel website, in the Camel Quarkus section, there is a page about all the extensions that we have, and you can see that we already have more than 300 extensions, with all the information about every extension.

C
So there is a whole bunch of extensions there that are already available to use in the Quarkus platform and, of course, we benefit from the same developer experience as any Quarkus development. So now it's demo time. With Rachel and María, we built a demo that we're going to assemble in different steps during this presentation, and what we're going to do is have three different connectors.
C
One of them is going to pull data from Telegram and push it to a Kafka topic, another one from Twitter to a Kafka topic, and then we will have another one that will take the aggregated data from Kafka to Elasticsearch for future data-science usage. The idea of the whole demo is to show you different types of Camel connectors that we can build for OpenShift. So for the first part, I'm going to show you the Twitter-to-Kafka connector, and I'm going to use Camel Quarkus for this.
C
So I have my application here, and what I wanted to demo is that I created it from code.quarkus.io, and I just selected the Camel extensions that I want, which are the Twitter one

C
and the Kafka one; and I need some logs, so I also took the Camel Log extension. I just downloaded the zip, and it created an app for me. In the Maven POM file, the app already has all the dependencies that I need. It also has the build info, like the native profile to build my native app.

C
It also has the Dockerfile, so I don't have to take care of all of that; I can just package the app. I have a first REST endpoint just to test, and I can actually just run the code and I will have my first REST endpoint. Now, I don't need this REST endpoint, so I'm just going to delete this class, and what I actually want to do is build a Twitter route.
C
I just have to mention that I've already added (I hope I'm not going too fast) some properties here. Here I have what the Twitter component needs as keys, so that it accesses my account, my developer account at Twitter, and I can go and do some searches on the tweets.
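The property names below are illustrative rather than the exact ones from the demo; the pattern being described is that `application.properties` references environment variables, so the secrets never live in the file itself:

```properties
# application.properties — credentials pulled from environment variables
twitter.api-key=${TWITTER_API_KEY}
twitter.api-secret=${TWITTER_API_SECRET}
twitter.access-token=${TWITTER_ACCESS_TOKEN}
twitter.access-token-secret=${TWITTER_ACCESS_TOKEN_SECRET}
kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}
```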
C
So what I'm going to do is start the Camel route that is going to do a search on Twitter, and I'm going to do it in dev mode, so we'll see the code refresh automatically.

C
So if it gets something new, I will know it; but generally the consumer takes the last five tweets that mention Apache Camel, and then, if there are new tweets, we will see them coming in. So that I have a bigger terminal, I'm switching to my terminal, and I'm just going to run the Maven command: `mvn compile quarkus:dev`.
C
And it's building my app and... here, yeah, it was quick. So here the app started. It picked up the variables that I set, because, something I didn't say about the properties: those values are already available as environment variables. So I just put the names, and I have them available in my environment. And so the Twitter search route already started, and here it logs the last tweets that have "Apache Camel" in them.
C
What I want to do for our demo, so that we know on the Kafka topic whether a message comes from Twitter or Telegram, is to change the body of the message. So I'm going to leave my app running here, go back to my route, and add a setBody step to transform it. But here I'm just going to copy-paste, so that I don't make any mistakes.
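A hedged sketch of the route being described, written here in Camel's YAML DSL (the actual demo uses the Java DSL, and the topic placeholder is an assumption):

```yaml
# Poll Twitter for "Apache Camel", tag the body so the Kafka consumer
# can tell which source it came from, then publish and log it.
- from:
    uri: "twitter-search:Apache Camel"
    steps:
      - set-body:
          simple: "Twitter: ${body}"
      - to: "kafka:{{kafka.topic.name}}"
      - to: "log:info"
```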
C
I am already connected to my project here, so I can do a `mvn clean package` and tell it that I want to deploy on Kubernetes, and I'm going to tell you how this is magically done. What I've done is add two dependencies. One is quarkus-openshift, which will

C
let me deploy my app to OpenShift, and I also added the container-image-docker one, because I personally want my app to be in a container before I push it. And here I have some configuration about how I'm going to build my image, and some Kubernetes settings, like the name that I want to give to my app. Oh, so it has actually already built my image and sent it to my OpenShift.
C
I wanted to show you that we have already installed the Strimzi operator for Kafka, and if we go here, we have a cluster; and if we look for the topic, here we have the topic that we're going to use. So I'm going to go to my developer view, and my app is failing here. I have also put an application that is just consuming from this topic, so that I can get the logs there, and the code is here.

C
So what I did is that I've already created a secret with all the variables that I had on my computer: the Kafka bootstrap broker URL and the four keys and secrets that I need for my Twitter account.
C
And here it's running, so if I go to the log this time, if there is no error (I don't know if you can see it clearly), I have the five tweets from the search that I pushed to Kafka.

C
There is something that I wanted to show here: the app with the Twitter Camel route started in 435 milliseconds.

C
I'm just going to write that down here, so that we can compare later, when we do the native mode, which is going to start quicker.
C
So, I didn't expect the consumer to already have some messages, but this is why: I don't know if it got some new tweets. Maybe I can tweet something.

C
So I'm just going to show the native build. I'm not going to push it, because it's going to take some time, but it's actually the same command, and what we do is pass the native profile that we already have in our app. It's going to take more time to build because, as we saw in the slides, when we are native,
C
we have a lower memory footprint and faster startups, so there is a whole lot of work done in this phase to analyze the code and include just what the app needs. But I have already created a Docker image for it, so I'm just going to stop this one, maybe.
D
Okay, great. So, just a quick recap. So far, we've learned about the benefits of using an integration framework.

D
We learned that Camel is the absolute best and most robust integration framework in the whole wide world, and we also learned about writing crazy-fast Java applications using Quarkus, and how we can use Quarkus extensions to leverage all of the benefits of Camel. So where does that leave us?
D
Well, when you look at the big picture, mainly the development process, it's a lot. It's a lot to learn; it's a lot to do. Because, if you think about it, the majority of the time is spent handling dependencies and doing things like preparing for deployment to OpenShift or Kubernetes: you have to configure Docker or S2I,

D
you have to create a container, build the image. All of that can get pretty daunting. So we wanted to create something specifically made for serverless that is also smart enough to do those kinds of repetitive and time-consuming tasks for us. At the same time, we wanted it to work natively on Kubernetes and, even more importantly, we wanted to lower the barrier to entry, to eliminate a lot of the associated complexity, and to make it easier for people to learn and pick up.
D
So, naturally, the thought process behind all of this was that the Apache Camel project needed to evolve a bit to accommodate these requirements, mainly to be able to work with serverless and microservices architectures.

D
To accommodate these architectural trends and changes, and just like with the Quarkus project, a subproject of Apache Camel was created so that you get the same benefits from it, except this one is native to Kubernetes as well, and specifically aimed at serverless. As a result, it's called Camel K.
D
So what exactly is Camel K, and how does it work? Camel K runs on top of Quarkus. First of all, it enables developers to write very small, fast Java applications, like you just saw. One of the biggest benefits, I think, is that Camel K handles Camel dependencies for you, which is a huge win, and, of course, it also removes the need to configure Docker or S2I before deploying to OpenShift or Kubernetes.

D
That means you can then continue to focus on writing integrations, just using the already really simple Camel DSL, or domain-specific language, with no need to worry at all about how you're going to package it, deploy it, redeploy it, and that kind of thing. So it's straightforward to make a Kubernetes-native integration application using something like Camel K.
D
So, operators. As probably everybody here knows, operators are commonly used to install and configure applications or platforms, whether on Kubernetes or OpenShift. They're the digital version of the traditional human operator, who used to do all of this manually: installing dependencies and everything else for applications, whether in a legacy environment or not, just making sure everything is in place for the application to be able to run and do its job.
D
So this list here is just to give you a general idea of all of the things that the Camel K operator does, and how much time it will really save you.

D
The main responsibility of the Camel K operator is to look for Camel K integrations and to build and deploy them as Kubernetes applications. Just as straightforward as that, and all of that is really possible because of the Operator SDK.
D
It basically performs the operations on the Kubernetes resources that are needed to run the Camel DSL script, and part of that is that it defines several new Kubernetes APIs; it extends Kubernetes with custom resources. In other words, the operator scans your application and creates the resources that you need in the cluster automatically.
D
The three main concepts of Camel K: well, we already mostly discussed the Camel K operator. It's basically the intelligence that coordinates all of the moving parts, where each custom resource has its own dedicated state machine that orchestrates its phases. There's also the runtime, which provides the functionality to actually run the integration, and then there are traits.

D
I won't get too much into traits; it's a more advanced concept, but the general idea is that you can customize the behavior of the operator and the runtime. Typically, for most people, the defaults are sufficient, but just so you know, it's possible for an experienced user to modify them.
D
So, to get started with Camel K: first, you need to be logged into a cluster you have access to. You have to install the `kamel` binary, put it into your system path, and run `kamel install` to install it. That will configure the cluster for you with custom resource definitions and install the operator in the namespace, and that's pretty much it.
D
Writing your first Camel K integration is incredibly simple. The first thing you do is just create your integration file. Camel K currently supports a bunch of languages; just off the top of my head: Java, Kotlin, Groovy, XML, even JavaScript. And that's quite important to somebody like me, because I have to use JavaScript more often than not, and I wanted something that was going to be easy to work with and that wasn't a chore, so this was a very low barrier to entry for myself as well.
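As a minimal sketch (file name and DSL choice are assumptions; any of the supported languages works the same way), an integration file like this would be handed to the operator with `kamel run hello.yaml`:

```yaml
# hello.yaml — built and deployed by the Camel K operator,
# no Dockerfile or S2I configuration needed
- from:
    uri: "timer:tick?period=3000"
    steps:
      - set-body:
          constant: "Hello from Camel K"
      - to: "log:info"
```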
D
But what's really cool about this, that I should probably mention, is that it's able to materialize and start the integrations within just a few seconds. That helps a lot during the development phase, because you get immediate feedback on code and you can make changes right away.

D
So you may be asking yourself: why serverless? What is the big deal with it? Well, I'm not here to convince you yes or no, but some of the touted benefits are listed here. Mostly, nobody wants to have to be predicting their workload.
D
Camel K provides a lot of features when it's run on Knative, and if you're not familiar with Knative, I'm not going to go too in depth on it, but it basically gives you serverless capabilities on Kubernetes. There are three major areas in Knative. There's the build area, which provides you with custom resources, and the Knative serving area:

D
that's the part that helps you with autoscaling and scale-to-zero, so that when there's no traffic, pods or containers can be reduced to zero replicas. And then there's the Knative eventing area, which I think is more specific to Camel K, where you subscribe to a channel and that channel pushes events towards your service. It just gives you an easy way to trigger your functions and, at the same time, to orchestrate services.
D
But I think the thing that really makes Camel K shine here is that your service just receives the messages through incoming CloudEvents, which means that you don't have to actively connect to the broker. So the service ends up being quite passive, and the Knative trait actually discovers the addresses of Knative resources automatically and injects them into the running integration.

D
And if you already have an existing Camel K integration, then it's possible to run it as a Knative serverless service.
D
So, with serverless becoming a popular architectural style, you'll see many examples, but it's important, I think, to remember that you don't need to use Camel K only for serverless. Using it alone, or even just to deploy a Quarkus app, is a very common and useful thing to do. Don't get overwhelmed by all the technologies: just because they work really well together doesn't mean that they're dependent on each other.
D
Camel K also offers the possibility to set up monitoring, and that can be done for both the integration and the operator. I believe, for the integration, if you have OpenShift already, the Prometheus operator is already deployed as part of OpenShift monitoring; and to monitor the operator, you would just set that up at the moment you're installing Camel K. And then, of course, you can set up alerting, and you can

D
visualize collected data using something like Grafana or some other API consumer. Also quite important is that Camel K helps with transformations. Adding a transformation is as simple as adding a line or two to your Camel DSL, to your integration.
D
Something like converting the outgoing body to uppercase would be an example. You would just add it as a step, and you can have as many steps as you like.
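As a sketch (the sink endpoint is a placeholder), the uppercase transformation mentioned above is just one extra step inside a route:

```yaml
# ...inside a route definition, add a transformation step:
steps:
  - set-body:
      simple: "${body.toUpperCase()}"   # convert the outgoing body to uppercase
  - to: "kafka:my-topic"                # hypothetical sink endpoint
```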
D
So I'll be doing just a teensy-tiny demo, following the theme of adding camel sightings that will immediately end up in a Kafka topic. This time I'll be reporting my sightings through Telegram, so I've already created the Telegram bot, but it's very easy to do; you can create one in under a minute or so. I'll leave the link in the slides.
D
So I've already, kind of lazily, got the integration here. It's written in JavaScript, to change things up a bit, but don't get too jealous: it looks almost exactly the same in Java. So here, this is just showing where we're starting from: the input is coming from a Telegram bot, we add our authorization token here, set the body, and here we're marshaling it, converting it to JSON, for Kafka, which is where we're going to be sending it.
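The demo integration is written in JavaScript; an equivalent sketch in the YAML DSL (token and topic placeholders are assumptions) would be:

```yaml
# telegram-to-kafka: receive bot messages, tag and marshal them, publish to Kafka
- from:
    uri: "telegram:bots?authorizationToken={{telegram.token}}"
    steps:
      - set-body:
          simple: "Telegram: ${body}"
      - marshal:
          json: {}
      - to: "kafka:{{kafka.topic.name}}"
```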
D
It also synchronizes the source changes and reloads the Camel context automatically, which I'll show you; it's doing a lot at this point, building an integration kit, and so on. So from here we go to the OpenShift console; we're in the administrator view, so let's go to the developer topology view, and we can see here that the integration is running.

D
So you can see here in the log that it does get sent right away, and if I have time, I just want to quickly show you the Camel context being reloaded. So if I were to go back over here...
D
That's okay, that's okay, but yeah, you're going to have to take my word for it: it will update here and say "have a nice day". I will not stop this integration now just to show you that. But yeah, another thing is that you can also just get the status at any point, which will show you the running integrations, and of course, I'm not going to do it right now, but you can also run `kamel delete` and it will delete whichever integration you specify.
D
And with that, I will just leave you here, pointing out that I've left a quick summary and some resources here, and I'll hand it over to my colleague now to continue.
B
So, I hope you're seeing my screen. We have seen that if you want to build complex software architectures, you should use Apache Camel, because it makes it very easy to interconnect things. You have seen that, for example, creating a Telegram bot is really, really simple: in four lines of code you can create a Telegram bot, and that's it; you don't need anything else. Well, you do need to add the code for your logic.
B
Obviously, if you add commands to the Telegram bot, you will have to write the code that reacts to those commands, but the part about how to integrate with the Telegram bot, what the Telegram bot API is like, we don't really need to know. We just rely on Camel. If Telegram updates their API, or how they interact with the bots,

B
you don't care: you just upgrade Camel, and we take care of it; it will work. So it really makes it very easy to develop things with different third-party components interacting. Then we saw that, well, Camel runs on Java, and Java is sometimes not the fastest, not the best for serverless.
B
The idea behind Kamelets is that, for creating a Camel workflow or a Camel integration, you usually use a lot of different pieces, and if you are focusing only on the logic of your use case, that may not be as nice as it could be.

B
For example, if you are a scientist analyzing camel sightings from all over the world through a Telegram bot and a Twitter API, you want to be able to integrate that with your machine-learning platform, or whatever, easily, and you don't want to worry about how to interact with it. Maybe what you want is for some nice developer to create a route snippet, a Kamelet, that helps you create workflows faster. For example, imagine you have an API.
B
Well, don't worry: you can create a Kamelet that, in a transparent way, greatly simplifies all these calls to the API, or whatever it is they work with. Maybe it's not one step, maybe it's more than one step, but you can simplify it, so your scientists can build their own workflows in a very easy way with not so many building blocks. So it's like a meta-block.
B
This is a very Camel-native concept, I think. Camel usually has this consumer/producer distinction, depending on whether the endpoint is reading data from the outside or writing data, and Kamelets have a similar idea: you have source Kamelets, which take data from the outside, and sink Kamelets, which write data somewhere. So they're like two different types of steps. You usually create a source Kamelet

B
that reads the data and a sink Kamelet that writes the data, and you put first a source, then a sink, and join them. You usually have only two steps in your workflow: you are pairing a source with a sink.
B
The idea is to simplify workflows even more. At the beginning, I told you this is open source: if there is some connector that is not available, maybe you want to create your own connector, but that means you have to implement it in Java. Maybe you are not a Java developer; maybe you are a general DevOps person.
B
B
So
in
our
use
case
we
already
have
the
telegram
to
kafka
the
twitter
to
kafka,
and
now
we
want
to
collect
everything.
That
is
sent
to
kafka
through
telegram
and
twitter
and
store
it
in
an
elastic
search
and
that's
the
part
I'm
going
to
do
with
camelot.
The
elasticsearch
camelet
is
not
yet
it's
only
on
the
snapshot
version.
B
It's not in the release version, but it's going to be in the next one, because it's already committed, so it's there. And this code you see on the left is everything you need to write in your file to connect from Kafka to Elasticsearch.
B
The name is going to be kafka-to-es-binding. Then you define the source, which is going to be a Kafka source, where you define the URL and the topic (no username or password in this case), and then the sink, which is where I'm going to write the data, which is going to be Elasticsearch. Here I have to define the URL, the cluster name and the index name, which is how Elasticsearch identifies the index in the cluster where you want to write to, and then the username and password. And of course I'm not going to show you my password, but what I'm going to do is just use this file with the proper password and add it to the cluster. This is the current topology.
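A sketch of the binding file being described here, reconstructed from the narration: the Kamelet and property names assume the community Kamelet catalog of this era, and all the values are placeholders, not the ones from the demo.

```yaml
# Sketch of a KameletBinding: Kafka source -> Elasticsearch sink.
# Kamelet names and property keys are assumptions based on the community
# catalog; hosts, topic, index, and credentials are placeholders.
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: kafka-to-es-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: kafka-source
    properties:
      bootstrapServers: my-cluster-kafka-bootstrap:9092
      topic: social-feed
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: elasticsearch-index-sink
    properties:
      hostAddresses: elasticsearch-es-http:9200
      clusterName: elasticsearch
      indexName: social-feed
      user: elastic
      password: changeme
```

"Add it to the cluster" then amounts to something like `oc apply -f kafka-to-es-binding.yaml`; the Camel K operator picks up the resource and materializes the running integration.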
B
So if I, for example, retweet something with Apache Camel, I should see that here when the Twitter source queries it, though I'm not sure it's going to be here yet. Here is one from Twitter: I have this bot. So let's check that this is really in Elasticsearch. Well, I have here things from last week, when we were testing, but if, for example, I search for the latest from Telegram, I see a camel on my desk.
B
If I search for the latest from Twitter, there's the one I just retweeted with the pineapple. And I don't know if we have already been talking for one hour, so maybe it's time to open for questions or chat a bit about Camel. We could talk for hours and hours about this topic, but I think it's better if we just do a quick review, like we did: the state of the art of Camel, starting with trusty, classic, traditional Camel, moving to Camel Quarkus, then Camel K, and then let's see what people want to hear about.
B
What I see right now is that we are moving: we are pushing a lot on the serverless side, much more than in previous years, when the core Camel work was very active. I think Zineb can talk more about our activities there.
B
That's Camel Quarkus; I'm right now working on the Kamelet side, which is, as I said, a concept that was born at the end of last year, and there are some very good articles from Nicola explaining where this comes from and where it is going. We only have around 20 Kamelets, compared with the 350-something connectors in Camel.
B
So this is a very small subset of Kamelets, but I think it's improving a lot, and it's going to be very visual with this topology view in OpenShift, so you can very easily connect different snippets of code. And, I don't know, in my experience, what I feel is that some people still think that Camel is not easy to start with. Even if I see it as very easy now, okay, maybe it's not as easy to start with, and the Kamelet thing is going to push a lot on making it even easier, because you can now separate your development team, which creates the Kamelets, from the people who are going to use those Kamelets: maybe scientists, maybe analysts, maybe, I don't know, whoever needs to build integrations.
A
B
Yes, of course, everything is open source, so we have resources. Let's put it in the chat, so we can tweet it later. This is the community repository of all the Kamelets that are there. I see that many have been added in the last days, even hours ago, so this is getting a lot of speed, adding more Kamelets every day. And of course the Apache Camel website is the place you should go first for any kind of documentation we have.
A
Yeah, well, I think that means we're going to have to have you guys back to do another one, because this has been amazing. The demos, and shining the light on these future integrations and what's available now, are pretty amazing, and you know, you shouldn't hesitate to get involved in the Camel and Quarkus universes. I think their time has arrived.
A
Did you want to add any final words around where you see things going these days? What's next for your adventures?
C
For me, I'm on the Camel Quarkus side. For now there is this Quarkus 2.x work that we were doing, and of course the whole team did an amazing job to have so many extensions there, and there's still some work, because some of them are not yet ported to native. There's always so much work, and if you want to get involved in the community, just come and see and talk to us on the mailing list or on the Zulip chat, and yeah, there's lots of work to do.
D
Thanks. Well, recently 1.4 was just released. A lot of the focus has been around Kamelets, actually, so a lot of what Maria has said is kind of where Camel K is moving towards. We just added the bind sub-command as well, which helps you to use Kamelets directly whenever you need them. And also we're exploring the user interface side of things.
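The bind sub-command mentioned here can produce the same kind of source-to-sink binding straight from the command line. A hedged sketch, assuming the community catalog's Kamelet and property names; every value below is a placeholder:

```shell
# Sketch of the Camel K CLI "bind" sub-command (added around Camel K 1.4).
# Kamelet names and -p property keys assume the community catalog;
# hosts, topic, and index are placeholders, run against a live cluster.
kamel bind kafka-source elasticsearch-index-sink \
  -p source.bootstrapServers=my-cluster-kafka-bootstrap:9092 \
  -p source.topic=social-feed \
  -p sink.hostAddresses=elasticsearch-es-http:9200 \
  -p sink.indexName=social-feed
```

Under the hood this generates and applies a KameletBinding resource, so it is a shortcut for writing the YAML file by hand.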
A
Well, as the Kamelet world spins and develops, we'll definitely have you guys back, and maybe even a walkthrough of creating and contributing a Kamelet might be a great future topic to have you back on. I'm thrilled with the depth of the content and the demos, so this is really good. It's one of the best overviews I've seen explaining the whole Camel universe, so thank you very much for this. I'm not seeing questions in the chat; I'm just going to pause and see if Chris has found any in all of the other places where we're streaming. None yet, so you have answered all the questions, or you've left them with just enough mystery that they'll go off and explore for themselves. So thank you very much for taking the time. I know you're in London and Spain and France, and time zones are always a fun thing, but we totally appreciate you coming, and we'll definitely have you back. Yeah, this was a big one.
A
It was a big demo, but it's a great thing to try and break Camel down into these pieces and parts; very digestible. So thanks, and we'll talk to you all again soon. Thanks, everybody, for coming.