From YouTube: OpenShift Coffee Break: Camel K on OpenShift
Description
This session introduces the newly released Camel K in the Red Hat Integration portfolio and showcases a fun and relatable use case. Internal teams in an organisation collaborate to collect data in an event-driven mesh of applications on OpenShift, combining the new low-code building blocks of Camel K with AMQ Streams.
A
And here you are, welcome everybody... oh, hold on a second, just as usual. What happened there? Sorry guys, I was having two video feeds at the same time and they were both playing into my headset. How are you, guys? Welcome to the coffee break, the OpenShift Coffee Break. We have Fabio, the usual host, with us today. Hello! And we will be talking about Camel K, on OpenShift of course. Have a cup of coffee as usual; this is my second one for today, I don't know about you. First of all, I wanted to ask Bruno to introduce himself.
C
Yeah, hello everyone, very happy to be here. Thanks, everyone, for joining us for coffee time. I brought my coffee, but I'm not drinking from the mug, I'm taking it straight from the coffee maker, drinking from here from time to time and burning my hands as well. So, my name is Bruno.
C
I've been at Red Hat for about six years now. My world is integration. I've done a lot of Camel work, especially going to customers and helping them build proofs of concept, and prior to my life at Red Hat I was still in the integration space, especially on non-open-source technology. So I have quite a good background on different products and technologies, and that gives me a good perspective. And actually, a long time ago...
C
...I experienced Camel for the first time and I was very attracted by all its power. Today we will see the latest evolution of Camel, and hopefully you will notice how powerful it is.
A
Excellent. I just have one question, maybe for the benefit of those that know OpenShift but may not be familiar with integration in general: is there a difference between Fuse and Camel?
C
Right, so Fuse would be the product name, the traditional Red Hat product name, that incorporates Camel. Camel would be the star feature of Fuse: you would implement integration flows and applications using Fuse, but the core feature would be Camel at its core. Now we have evolved from Fuse, and our next step is that we just call it Camel, so Camel on Quarkus, or Camel K. We are leaving behind the product name...
C
...the brand name Fuse, because I think customers prefer to relate directly to the upstream project, which is really, like I say, the main feature of the framework.
C
Yeah, absolutely. It's Java based and it has a ton of functionality. One of the main characteristics of this evolution is that Fuse used to run on Camel 2, all in Java, and then a new version was released from Apache Camel, Camel 3, and basically everything that we are packaging now is based on version 3. It is still all Java based, simply because of the incredible heritage of functionality that Java provides.
C
The only thing, let's say, that happened during the last years is that, because everyone was moving to containers, Java had a big memory footprint. But then Quarkus solved that and made Java relevant again, and that's what Camel K and Camel on Quarkus are benefiting from. So we can keep all that functionality, that ton of functionality, with Camel and Java.
C
Great, a ton of stuff. So yeah, we'll start presenting. Not yet, though; I first want to set some temporary tokens and such. But basically I want to introduce Camel K, because I understand that in the last session, a month ago or so, Kevin introduced Camel for Quarkus, the base of Camel K, and he explained that transition and the benefit of running Camel on top of Quarkus. Today we are focusing on Camel K.
C
So, like I say, it's based on Camel Quarkus; it's built on top of Camel Quarkus. The main difference, though, is that Camel Quarkus can be deployed both on RHEL or on OpenShift, so you can take it and exploit the full capabilities of Camel, while Camel K is purely for Kubernetes. You need a Kubernetes environment, because the aim, the objective, is to provide a very easy life for the developer.
C
We'll see that as I present. So basically I will introduce Camel K and then we have a long demo packed with features, and hopefully that will give you a good taste of the different bits and pieces. First, if you don't mind, while you guys perhaps discuss a couple of things, let me just set some temporary tokens, because you'll see that we need those for the demo later on. Bear with me.
A
Fabio, just a question for you, and it's just your opinion, of course. So, Camel K, and we will see what it brings to the table: if it is designed on purpose for Kubernetes operation, do you think it could also address integration needs at the edge, considering that now OpenShift can run on edge servers?
B
With this runtime, in general for Java applications, I think we could probably now address integration needs at the edge as well.
B
If we think about fairly simple integration activities at the edge, and not very complex ones, then probably yes, I think it is something we can start to address, if we compare this with Red Hat Fuse, let's say old style. That doesn't mean Fuse was not good, but it was not good for this.
A
On the OpenShift Coffee Break, demos are live, and so we had to make sure that Bruno had all the tokens set up so that it worked smoothly. Ish.
B
Just one piece of advice, Bruno: if you could please enlarge the presentation, because sometimes the characters can be small.
C
All right, so should I go ahead then? Please.
C
Is this okay? Okay. So, like I say, thank you very much for joining us for coffee. Very quickly, what the session will contain: an introduction to Camel K.
C
We said that I need to explain certain building blocks, because those will be used in the demo, and understanding them will probably help you follow better. Then we have the demo itself, which I will introduce as well, and which is about collaboration between different teams in a company. We'll see that. So it's very dense for 35 minutes.
C
I need to say plenty of things, but of course, if there are questions or we need to comment on something, we can stop and then I will try to adjust to the time that is left. But it's very packed and I'm going to speak fast; hopefully you'll be able to follow. Great. So let's introduce Camel K, and I want to start with these three custom resources. Because this is the OpenShift TV channel, I'm assuming that more or less everyone is familiar with OpenShift and with custom resources.
C
So these are the resources that we can push to the environment, and Camel K has three key resources that I'm going to be talking about, or referring to. I'm not going to explain them now, so I just want you to remember these three bits. The first one, Camel K integrations, is where we are going to push the definitions of what I'm going to explain next; that will go in the first one.
C
So, basically, what is Camel K? Well, Camel K comes from the open source project Apache Camel, which is well known as the Swiss Army knife of integration. It is an open source project that does all things integration, and it's probably the most popular integration framework. Camel K is based on it.
C
The K comes from Kubernetes, meaning that it runs natively in Kubernetes. Basically, it is a platform where we can directly run integrations on OpenShift and Kubernetes, with serverless-ready capabilities if you have those in the environment. It's not a must; it's simply that if you have them, then you can benefit from them, and we saw that in Kevin's presentation, so please jump to that earlier session...
C
...to get more details. Today we are not going to focus so much on serverless, because that was covered and more or less understood, but know that Camel K deals with that very nicely as well. It is architected around Kubernetes custom resources and operators, and it kick-started in 2018.
C
So what does Camel K bring to the table? Well, it has super-rich connectivity, and of course this is inherited from Apache Camel. It has literally hundreds of connectors, and you can use those to connect to any cloud service or any traditional endpoint like HTTP or databases, or things like that.
C
Then it has the characteristics to run on serverless, so serverless integration with the ability to scale to zero and kick off very quickly. This is of course thanks to the super efficiency that the Quarkus engine provides: it takes the Java footprint to a minimum, so super small, and because it consumes very little memory it starts up super quickly, and that's what you need for serverless.
C
Then, of course, it runs enterprise integration patterns, EIPs, in a cloud-native way. This is where Camel excels, giving you these sorts of building blocks to solve any use case in integration. Those are well known, and you have all the pieces that you need in Camel to do that very easily.
C
It does seamless integration with capabilities on the environment, on the platform. You'll see that: if you have, for example, Knative or a Kafka cluster, the operator will wire your application to it automatically.
C
So, super easy; we'll see that in the demo. That's one of the aspects of this developer joy, where you don't need to deal with those things. Camel K really aims at the developer: it provides a way of implementing the integration without having to handle a project with a POM file and dependencies and all that. It's going to handle the dependencies for you.
C
So all you need is just the source file, and you have that live-reload mode, the developer mode, that we will see as well and that Kevin also showcased last time. So yeah, this is sort of the magic formula. This is what Camel K is based on: Apache Camel, Quarkus as the engine, and the operator, sort of the robot that will assist us in so many ways to roll out...
C
...our definitions, deploy containers and run them, and so on. So yeah, powered by the next-generation building blocks. And from a developer point of view, this is really what it takes to fully deploy a service: it's three steps. What you see here is that the first thing the developer needs to do is just create one source code file. In this case we have an example done in JavaScript, but all the other languages are available as well.
C
It is exposing an HTTP endpoint for a service called roll die, and when requests come in, it randomly picks a number from one to six and returns it. So when the developer creates these three lines of code, they can, in step two, execute this with the Camel K client, kamel: you just say kamel run and then the name of the file, and then that is pushed to the environment and the operator does the rest. It sees immediately that you have pushed a new custom resource, an integration resource, the first one that I showed on the first slide, and that will deploy the container and run this web service straight away, fully accessible: you can fire requests and you will obtain results immediately. Right, so that was the quick intro to Camel K.
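The three-line source file described above is not shown in the transcript, so the following is a hypothetical reconstruction of what such a Camel K JavaScript integration could look like; the endpoint path and file name are assumptions:

```javascript
// rolldie.js -- a sketch of the roll-die service, not the speaker's exact file.
// Camel K evaluates this file with its JavaScript DSL; it is not standalone Node.js.
from('platform-http:/rolldie')                         // expose an HTTP endpoint
    .setBody(() => Math.floor(Math.random() * 6) + 1)  // pick a number from 1 to 6
    .convertBodyTo(String);                            // return it as plain text
```

Deploying it would then be the single command `kamel run rolldie.js`, after which the operator builds and rolls out the container.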
C
But then we have these constructs that are part of Camel K that are super important as well, and very handy, as you'll see. The first one is Kamelets. Kamelets would be that second custom resource on my first slide. So what are Kamelets? The name, as you can see, comes from putting together "Camel route snippets": put those together and that gives you Kamelets. A Kamelet is a Kubernetes resource that hides some Camel logic.
C
So it's a sort of small recipe, a small Camel recipe, that you can push as a custom resource and then reuse a little bit everywhere. It does some integration steps; it sits passively, but then, from Camel K in your code, you can consume Kamelets as one single instruction presented in a simple interface.
C
So you see here a Camel K example where the starting activity is a Kamelet, and in this case we are choosing one, the Chuck Norris source, that basically fetches from the internet. There's a service that you can reach that gives you jokes about Chuck Norris, and we print out the joke itself. So this two-liner, when you push it onto the environment, will give you a joke every second.
C
As simple as that; this is how you reuse Kamelets. And there are more or less three types of them. You have sources and sinks, which you can see on both sides, and then actions in the middle. Basically, sources fetch data, actions let you do little transformations, and sinks push data to systems out there. So you have these three types of Kamelets. Now, what's interesting about Kamelets...
C
...is that Camel K provides a full catalog for you, right out of the box, so you have dozens of these Kamelets available to you. The developer can go to this catalog, pick what they need, and then use that in their Camel K definitions. What's great as well, and this is very characteristic of Camel, is that you can always extend the functionality.
C
If you don't find what you want in that catalog, you can create your own ones, so you can effectively create your own mini catalog. This is very interesting, for example, for an organization that has some custom manipulations for some data, things that only they do or that are private, so they can form this mini private catalog and then reuse it internally.
C
So that's the great thing about Kamelets. And on top of that, these Kamelets are not only great for developers, because they can quickly accelerate these integration definitions, but also for the non-Camel user, the regular Kubernetes user. Without having any knowledge or experience with Camel, without having to know the syntax or the language itself, they can actually use these resources to create integration flows. So you would wonder how they can do that.
C
Let's see how that is possible, and that is where Kamelet Bindings come into play. This is a way to facilitate that ability for a regular user to define an integration flow. Kamelet Bindings are the third resource that I introduced on the first slide, the one on the right, and basically...
A
Can we ask a couple of questions from the peanut gallery first? You mentioned the catalogue. Are Kamelets more code-time constructs or runtime constructs?
C
Correct, so these are building blocks. They are building blocks that sit passively: you define Kamelets, you just make them available in the environment, and they are there simply for us to pick and choose and use in Camel K or in Kamelet Bindings. We'll see that in the demo.
C
It's automatic, yeah. You have the catalog automatically deployed when you prepare Camel K: when the operator and the platform are there, you will have the catalog immediately available. And then, if you want to extend it and create your own catalog, all you have to do is define a bunch of YAML files, each YAML file being a Kamelet, and then push that to the environment, and those will coexist with the rest of the catalog.
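As a sketch of one of those YAML files: a custom Kamelet is itself a custom resource wrapping a small route template. The action below (uppercasing the message body) is a made-up example, and the exact field spelling may vary between Kamelet CRD versions:

```yaml
# my-upper-action.kamelet.yaml -- a minimal custom Kamelet (hypothetical example)
apiVersion: camel.apache.org/v1alpha1
kind: Kamelet
metadata:
  name: my-upper-action
  labels:
    camel.apache.org/kamelet.type: action   # action Kamelets sit between source and sink
spec:
  definition:
    title: Uppercase Action
    description: Converts the message body to uppercase
  template:
    from:
      uri: kamelet:source
      steps:
        - set-body:
            simple: "${body.toUpperCase()}"
        - to: kamelet:sink
```

Once pushed with `oc apply -f my-upper-action.kamelet.yaml`, it sits passively in the namespace next to the built-in catalog entries, ready to be picked.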
C
No, the upstream catalog is the one that comes out of the box with Camel K, and then it's up to you to have your own collection, and you can do whatever you want with it. You can even donate it to the upstream project, to Camel; I'm sure it would be welcome. Otherwise, if it's just internal matters, an organization would keep it for themselves.
B
Bruno, one more question regarding this very interesting concept of Kamelets. If I understood well, let's say that a company already has a bunch of beans, something like that, that do specific transformations or actions that they want to reuse: is a Kamelet the right tool for that?
C
Absolutely, it's a great way to construct reusable components. It's a great feature. There's lots of functionality that Camel provides for reusability, but for a Kubernetes environment this is great as well, especially for Camel K. You already have that with Camel, but it's even easier when using Camel K to define Kamelets to favor reusability and make a developer's life a lot easier.
C
Okay, thanks. All right, let's press ahead, because there's tons to explain. So this is how you typically bind Kamelets together. A Kamelet Binding would typically look like this: generally, from that catalog, you pick a source and a sink and you bind them together, and effectively you can see that you can create a data flow.
C
That is anything-to-anything, so you can integrate easily, and with no Camel knowledge, because with these Kamelets all you have to do is configure the parameters. As you can see, you could then, for example, fetch tweets and forward those to Kafka or to Telegram in a very easy manner. And likewise, instead of binding to another Kamelet, you can bind to, for instance, a Kafka platform.
C
Imagine here a Kafka platform in the middle, being the center of everything. You have applications that are producing events and consuming Kafka events. Well, you could supercharge this platform with the connectivity of all these hundreds of connectors and Kamelets from Camel: for any cloud service or any endpoint, you could retrieve that data and push it into the Kafka platform, and these applications would benefit from that, and the same in the other direction.
C
So, basically, we could fetch any of these streams and then push them to outside systems or cloud environments out there. As of today, Camel K can natively bind very easily to Knative Eventing, for serverless applications to emit cloud events, or to a Kafka platform to emit Kafka events.
C
This is how you define a Kamelet Binding, very easy: it's just a YAML file. You say that it is of kind KameletBinding, you give it a name, then from the catalog you pick a source and then you pick a sink. In this case we are picking again this Chuck Norris source, so we are fetching jokes and we are binding them to the Kafka platform, and effectively every second...
C
...we would be sending a joke as a Kafka event. And so this is really the summary: remember those three custom resources we spoke about on the first slide. The first one is a Camel K definition, the cloud-native Camel; this is more oriented towards experienced developers, who can use the full Camel DSL and define integration flows. Kamelets sit a little bit in between.
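The jokes-to-Kafka binding described a moment ago could be sketched roughly like this; the file shown on screen is not in the transcript, so the topic name and the Strimzi (AMQ Streams) apiVersion are assumptions:

```yaml
# jokes-to-kafka.yaml -- a KameletBinding sketch: Chuck Norris jokes into a Kafka topic
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: jokes-to-kafka
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: chuck-norris-source      # fetches a joke periodically
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: jokes                    # placeholder topic name
```

Because the sink references a KafkaTopic resource directly, the operator resolves the broker connectivity itself; no bootstrap URL appears in the file.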
C
They are passive, they're just a catalog, and then you can use Kamelets in your Camel K definitions or in your Kamelet Bindings. And remember, Kamelet Bindings are no-code, configurable flows, so any user without any knowledge of Camel can enable these integration flows immediately. So yeah, that's kind of the intro, and with that we get to the demo part. Now you know all the building blocks, and I'm happy to carry on. Are we okay with that?
B
Yes, please. Actually, I have one question, but I think it's better to move forward with the demo, and I will ask my question at the end.
C
Yes, I'm going to need to go a little bit quick. All right, so this is a demo where we want to see some collaboration between teams in an office. That is really the use case: we have different departments, but this use case is driven by the strategy team.
C
The strategy team needs to decide the next moves of the company, but for that they have some questions for which they don't have the answers, and they need to reach out to other departments to get some help, some input, that would then allow them to take those decisions and carry forward. Typically, what this strategy team does is get into a video call, create a Google Sheets document, and put all their concerns and all their questions...
C
...in there, and then they assign a possible team owner that supposedly can help them with that particular concern. So the aim of the demo is that we provide these integration capabilities to introduce that sort of automation, where we will be collecting all the input and answers from these different departments and automatically pushing that to the Google Sheets document, so that there's no need to chase anyone in person and we can make this process very efficient.
C
So, of course, we are going to enjoy the nice part of the demo where things flow very naturally, and hopefully everyone will enjoy that if everything goes well, but obviously we needed to prepare certain things for this to run well. So, of course, a Kubernetes environment.
C
We have an instance of OpenShift there for us, and of course we need to deploy the Camel K platform with the operator that will be watching all of our moves and then reacting, pulling out, deploying and running containers with our integrations.
C
We have the Kafka platform, because this is all going to be event driven.
C
We are going to be producing Kafka events and consuming Kafka events, and we need to prepare some access to the Google Sheets APIs, because of course we want to consume those questions so that we can distribute them. We will distribute them via email, just to simulate that we are in an office and that we interface with the communications with the different departments, and then we answer, and so on. And finally, some access to Google Drive as well, because we want to generate a report out of all this activity and then push some documents to Google Drive. All right.
C
So with all of that we can start. Faded in the background, we have all the stages; it's packed with different bits and pieces, so I will walk you through slowly so that you can follow. The first one is what I call stage one. We are going to be implementing stage one, and basically this first step is all about fetching those questions from Google Sheets and then pushing them as Kafka events. This is a pretty simple integration flow.
C
We don't need to do much; for this we don't actually need to do any development. All we need to define is a Kamelet Binding: we just pick from the catalog the right source, then we bind to the Kafka cluster, and then we push this to the topic. In this case the topic is called questions. So let's jump in and see how we are going to do that. I'm showing right now the OpenShift environment; you can see here the Kafka cluster.
C
And its operator; I'm not going to get into the details of that, but it's there for us. And we have the Camel K operator that is going to react to all the definitions that we push to the environment. We have here the Google Sheets document with the questions and the different team owners and everything, and you see that the answers are still to be filled in.
C
Then I have a mail server that I have deployed in a different namespace; I'm not going to go there, but in that other namespace we have a mail server. With that, we can look at the YAML definition. If I go to VS Code, this is where I have all the resources, and we can look at the Kamelet Binding for stage one. This would be the YAML definition for that first stage; you see that this is of kind KameletBinding.
C
I give it the name stage-one, and all I've done is associate a secret, and this secret just resolves the credentials for accessing the Google APIs. Then I pick here the source Kamelet; you see this is of type Kamelet, this is the source, and all I need to do is fill in these parameters, configuring the Kamelet with the temporary tokens. These are the ones that I generated right before starting the session. And then we bind this to the Kafka topic questions.
C
As you can tell, we don't need to define the connectivity details. The operator knows how to do that, because it's aware of the Kafka cluster, and basically all we have to define is the name of the topic that we want to push this to. With this, we can then push this YAML definition to the environment.
C
I'm going to show my console there, and I want to show you the instruction for that: scripts, demo, stage one. Look at the instruction: all we are doing is using the oc client and pushing the YAML file definition, the one I've shown you, to the environment. That's it. So let's do that: we run the stage one demo script, and when I do that, if we look at the environment, it should more or less immediately launch the pod.
C
This is the operator that has seen the YAML definition and has rolled out an integration that complies with the definition we have given. I'm just going to check visually that this has properly connected to Google, because sometimes things go wrong with the credentials or the tokens, but it seems all clean; it seems like it has successfully connected. So I'm happy with that, oops, and we go back to the topology.
C
So the first stage, I think, is considered successful. We will validate later on that we effectively got those Kafka events pushed in. So that's done; we have the first stage completed, and now we jump to the second stage. In the second stage we want to consume those Kafka events and send different emails, to distribute the questions to the different departments.
C
Here it is not as trivial as a Kamelet Binding, because we need to do some content-based routing. We need to look into the details of the payload of that event, see the team owner, and then send it there. For that we will use the Camel DSL, and we are going to pretend to be developers, because that's actually what I want you to understand: Camel K is really aimed at developers, to give them...
C
...an easy life. The platform is there for us; it should resolve for us all those tasks about deploying and rolling out containers and everything, and with Camel K we can focus on the business logic. All right, so let's look at that definition, stage two. This would be the Camel DSL and, as you can see, it's using the Camel language, but here, of course, I'm not expecting the audience to be familiar with the syntax and the language.
C
So I'm going to guide you through it. What we are going to do is delete the whole definition and write it ourselves. You would think that is going to take us some time, but actually we are going to do this with the help of a visual editor that I'm going to open up. This is an editor I've been working on for some time, and it is going to accelerate us and render the Camel DSL for us.
C
It's called Camel Designer. Basically, the first action for us is to choose the consumer: on this second stage we want to consume the Kafka events, so I click there and I have the starting activity, which is a Kafka consumer, and all I need to define here is really the topic that I want to obtain the events from, so questions. And I'm not going to do much more.
C
I only want to trace the payload, just to check that everything is working properly. So I'm going to say demo, and I'm going to print the payload of the event that we got from Kafka. Okay, I will stop there.
C
As you can see, it has generated some Camel DSL there for us, so I'm going to save this, and what I'm going to do now is push this to the environment and see if we are consuming the Kafka events; they should be there.
C
Again, I'm going to show you the command for that, demo stage 2, and we see that we are using here the kamel client. We say kamel run, we give a name to the integration, stage-two, and then we pass the XML definition, the one that we have created, and we just need to specify a dependency there for the JSON manipulation that we will do later on.
C
And then we pass the dev flag, and this is to implement this in dev mode; that's the famous dev mode of the Quarkus engine, and it hooks us up with the environment. Basically the operator is going to be aware of all the changes, every time we update the source code, and it's going to push that to the environment and roll out new versions for us. So let's run this command.
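The invocation just described might look like the following; the file name and the exact dependency coordinate (here Jackson for JSON) are assumptions, since the script contents are not shown on screen:

```shell
# Sketch of the stage-two command: run the XML route in dev (live-reload) mode,
# adding a JSON-handling dependency so the route can unmarshal the payload
kamel run --name stage-two stage-2.xml \
  --dependency camel:jackson \
  --dev
```

With `--dev`, the client stays attached: it streams the pod's log traces back to the terminal and redeploys automatically whenever the source file is saved.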
C
So, kamel run and so on, and when I do this, we see, let me see, all right, it's pushing the definition. You see it's only one single XML definition, one source file; it's pushing this to the environment, the operator is reacting, it is deploying a container, and we see the traces here. We see three traces that actually contain the row id from the spreadsheet. There are three tokens: the row id, the question itself...
C
...you can see one of the questions here, and the team owner. So we see here architecture, we see here operations, and we see here development. If we look at the environment, we should see the pod running; here it is, and I'm going to put it here alongside stage two, sorry, and it is hooked up to the environment. So now we can carry on as developers; we are going to keep making changes, implementing this and rolling out the changes.
C
The first thing that we want to do is clean the headers, because we want to send emails, so everything that comes from Kafka we want to clean, to prepare the headers for the emails. The following action is to use what in Camel is called a data format, to convert the JSON structure into Java, simply because it's then a lot easier to access the parts of the JSON structure.
C
This is just the unmarshal instruction, and then the following thing: I'm going to use the content-based router pattern to route the execution flow. We have three conditions, because we have three different teams, and you can see here, as illustrated in this little animation, that this is the content-based router: an event comes in and we decide whether it goes one way or another, into these different branches.
C
So now we just define the expression here, and we say: well, we need to look at the token that tells us the team owner. If we look at the logs, we see that this is the third token — so index zero, index one and index two. We need to look at index two there, so body index two, and then check.
C
We can copy and paste this, because we can define the other branches in the same manner, but for the other teams. So if the team is going to be development, then I can say development — no typo there, hopefully — and thirdly, this one would be operations, for operations. And now that we've decided the three branches, the last one is just the otherwise, in case: we could do error handling there, but it's not applicable for today's demo. So, okay!
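Putting the pieces described so far together, a sketch of this route in Camel's XML DSL might look like the following — the endpoint names, header keys and exact expressions are assumptions, since the demo's full source isn't shown:

```xml
<route>
  <from uri="kafka:questions"/>
  <!-- clean the incoming Kafka headers before preparing the mail headers -->
  <removeHeaders pattern="*"/>
  <!-- data format: convert the JSON structure into Java for easy access -->
  <unmarshal><json library="Jackson"/></unmarshal>
  <!-- content-based router: branch on the team-owner token (index two) -->
  <choice>
    <when>
      <simple>${body[2]} == 'architecture'</simple>
      <setHeader name="to"><constant>architecture@demo.camel</constant></setHeader>
      <to uri="direct:route2"/>
    </when>
    <when>
      <simple>${body[2]} == 'development'</simple>
      <setHeader name="to"><constant>development@demo.camel</constant></setHeader>
      <to uri="direct:route2"/>
    </when>
    <when>
      <simple>${body[2]} == 'operations'</simple>
      <setHeader name="to"><constant>operations@demo.camel</constant></setHeader>
      <to uri="direct:route2"/>
    </when>
    <otherwise>
      <log message="unexpected team owner: ${body[2]}"/>
    </otherwise>
  </choice>
</route>
```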
C
So now, if it is architecture, then what we want to do is send an email to the architecture team. So we define the target email address — that would be the "to" field in the email.
C
So we can say, all right: these should go to architecture@demo.camel — that would be the email address for architecture — and then what we are going to do is just invoke a secondary Camel route that is going to resolve the sending of the email. At the moment this secondary route doesn't exist, but we can create it very quickly with the direct component.
C
The direct component in Camel is just for calling from one route to another route. So we leave it as it is there, empty for now, and we configure this call to that second route. Okay, so that's configured now; we have done the upper branch and we can do the same for the other ones, so we say header — this should go...
C
This is the development team, so this should go to the development inbox — develop... developer, sorry, development — and then again we call the subroute, route2, immediately, and we do the same with the third branch. So we set the header; in this case it is going to be operations.
C
Let me change that there — operations — and we call that subroute there, and then we just organize this so that it's not in the middle of things. So with that we have this logic in place. We can double-click here to get access to the second route that we defined — and we've done a lot of code, as you can see here on the left, so we don't want to continue; we just want to validate that.
C
So far things are going according to plan, so I'm just going to add a trace here to check that we are doing the right thing. I want to log to see if it picked the right branch, so I'm going to say: print, please, the header variable "to" — and see what happens.
C
So I'm going to save this — I'm going to save these changes — and as I save, because it's hooked up with the environment, it should roll out the changes very quickly. And we do see a reaction from the environment and the operator, which is rolling out this last update, and we should see the new logs with the new traces so we can validate if things are normal — and they do seem normal. So we see the traces here.
C
We see that it has gone through the different branches — branch one, branch three and branch two — and we do see the architecture email address. It seems fine: architecture, operations and development. There seem to be no typos, so it all looks good so far; we haven't made any mistakes, and all that is left for us to do really is to send the email itself. So we just have to complete another set of header definitions.
C
So we have the target address, but we need to say who this is being sent from, and in this case we are sending this email on behalf of the strategy team. Strategy is the one that is asking these questions, so: strategy — that's the "from". Then we need to set the subject of the email, and this is going to be hugely important, because we are going to inject the row id in here; that will help us later on to populate the right row.
C
The right row in the spreadsheet, that is. So here we will use that as correlation information. So: question id — and here we use the token in the body that contains the row id, and that, remember, is index zero; that's where we have the row id. So we have the subject, we have the from and the to, and lastly, of course, we need the payload of the email. Basically, that would be the question itself.
C
So we pick the question — that's the token in the middle, so that's body index one; that's where the question is — and finally now we can send this via email, choosing the component using the SMTP protocol. So that's the one that is going to effectively send that email. So we can save this and see what happens: click save there on screen, and then quickly roll to the console, and we do see that again.
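The secondary route just described — the one reached through the direct component, which sets the remaining mail headers and sends via SMTP — might be sketched like this; the endpoint names, addresses and expressions are assumptions:

```xml
<route>
  <!-- the direct component lets the branches above call into this route -->
  <from uri="direct:route2"/>
  <!-- the mail is sent on behalf of the strategy team -->
  <setHeader name="from"><constant>strategy@demo.camel</constant></setHeader>
  <!-- inject the row id (token index zero) as correlation information -->
  <setHeader name="subject"><simple>Question ID: ${body[0]}</simple></setHeader>
  <!-- the payload is the question itself (token index one) -->
  <setBody><simple>${body[1]}</simple></setBody>
  <!-- effectively send the email over SMTP -->
  <to uri="smtp://mail-server:25"/>
</route>
```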
C
You know, the client is pushing the code update, the operator is picking that up — destroying the previous version and deploying this new one — and we should see more or less the same traces. There you go, we have them on screen, and we trust that these went to the mail server. Okay, so let me just make sure that my tunnels are up and running, because I want to connect with my local mail client to that email server. So I'm doing that — there, I have the tunnels open.
C
Then, if I open my client — this is my local email client — there you go: I got my three emails, for the three different teams. Fine, we can answer those later on, but basically this completes the second stage. So we have stage one and stage two completed, and now we can go to the third stage — and this one, again, is super simple.
C
This is the one that enables fetching the replies from the mails and then pushing those, again, to a different topic — to the answers topic in Kafka. And so here, again, I can use a Kamelet binding and just choose the right source and bind that to Kafka. So let's quickly look at that. I can actually stop this, because we have already delivered the emails, so I'll stop this. Let me just put that like that — I'm going to close my visual editor for now and then open that Kamelet binding.
C
So that's stage three, and just to show you very quickly again: one YAML definition, a KameletBinding. We give it a name, stage three, and we pick the right Kamelet; we fill in — we configure — the parameters. This is to connect to the local mail server in OpenShift, and then we have a middle action here: this is using a Kamelet to transform the mail into a JSON structure.
C
That will be very handy for us on stage four, and then we bind to a Kafka topic again — this is the answers topic — and you see that no connectivity details are needed; the operator will resolve that for us. So let's push that to the environment, and we say: okay — scripts, demo, stage... sorry, stage three. So I press enter, and we should see it in the environment.
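A KameletBinding along the lines just described might look like the sketch below — the Kamelet names, connection properties and the topic reference are assumptions, not the demo's exact source:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: stage-3
spec:
  source:                       # poll the local mail server for replies
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: mail-imap-source
    properties:
      host: mail-server
      username: strategy
      password: secret
  steps:                        # middle action: transform the mail into JSON
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: mail-to-json-action
  sink:                         # bind to the answers topic; the operator
    ref:                        # resolves the Kafka connectivity details
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: answers
```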
C
Hopefully — there you go, so it is spinning up stage three, which I'm going to place there. So stage three is up and running, and now we need to deploy stage four, which is going to be consuming those answers from Kafka and then pushing them to Google Sheets. Here, again, it's not that trivial, because we actually need to correlate the information: we need to pick up that information about the row id and know exactly which row in Google Sheets we have to update.
C
So we use the Camel language — the Camel DSL — to do these things. We are not going to be implementing it like earlier, but I can show you very quickly what the logic does. So this would be stage four; I can close this, open it with the editor very quickly and go to the key parts of the definition. Let me do this very quickly: the first part is, of course, that we need to consume these events from the Kafka topic.
C
Answers, right. Right after that we do this little transformation once again, because it's easier for us to access the tokens and everything, so we convert to Java. We do some more manipulations here, some preparations, but eventually we delegate to this activity, which is a secondary route in the Camel definition.
C
We can click there, and you can see that what we do is prepare some parameters for the Google API with a Java processor — because this is using the Java API — and then use the Camel component to push this to Google. The configuration is in a properties file that we deploy at the same time, but this is how we are pushing these to Google Sheets.
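The stage-four logic just walked through could be sketched roughly as follows — the endpoint options, header key, processor bean and spreadsheet property are assumptions:

```xml
<route>
  <from uri="kafka:answers"/>
  <!-- JSON to Java, so the tokens are easy to access -->
  <unmarshal><json library="Jackson"/></unmarshal>
  <!-- recover the row id carried back in the mail subject for correlation -->
  <setHeader name="rowId">
    <simple>${header.subject.replaceAll('Question ID: ', '')}</simple>
  </setHeader>
  <to uri="direct:updateSheet"/>
</route>
<route>
  <from uri="direct:updateSheet"/>
  <!-- a Java processor prepares the Google API parameters -->
  <process ref="prepareSheetValues"/>
  <!-- push the update; credentials come from the deployed properties file -->
  <to uri="google-sheets://data/update?spreadsheetId={{sheet.id}}"/>
</route>
```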
C
So let's execute that: we say stage four, and when I press enter we should see, again in the environment, that this is spinning up. The operator is seeing that definition and it is spinning up stage four, and at this stage, basically, we have the full return flow — so, these two flows — and at this point we can answer those demands: we can pretend to be one of those teams and answer them.
C
So, for example — you see, I have three different inboxes. If I click on architecture, they have this email that came to them; I can click on it. I'm going to magnify this so that we can see the question better, and we put our architecture hat on. So I'm putting my architecture hat on now, and we pretend to be someone from architecture. They pick up the mail and they see: after our recent company acquisition...
C
...how will we integrate their systems with ours? So we can reply here at the bottom, and I can enter the answer and say: well, our plan is to use Camel K for all our integrations. Then I can send this — and before I send, actually, I'm going to show the spreadsheet here, because, since we have enabled that data flow, it should automatically...
C
...you know, capture it, push it to Kafka, consume it and then push it to the spreadsheet — and hopefully it should work. And there it is: it has populated the row, as you can see on screen. So that has worked, the return flow is working, and now we can do the same thing for the development team. So now we are someone from development and — same thing — I need to change my hat.
C
So now I am someone from development — I changed my hat — and we look at the question there, and we see that it says: are we still having problems hiring developers? Because, as we know, finding skills is a difficult thing. But this guy, he knows the answer, and he replies and says: well, actually, with Camel K that is less of a problem, because we have these constructs, Kamelet bindings, where you don't really need to have experience with Camel K.
C
You just fill in parameters — these are no-code building blocks — and with that you can enable a data flow. So we send that, and immediately we see that it gets populated. And finally, we are someone from operations. Okay, so I put my operations hat on once more, and here we go: a guy from operations. I look at the inbox, I see this question, and it says: have we figured out how we will improve our long-running batch processes at night?
C
And we know that batch processes are very painful: they demand a lot of computing resources and they run very long. But these guys — he says, let me magnify that: we are implementing — implementing, sorry — real-time data flows using AMQ Streams, which is really Kafka, and of course with real-time flows they don't need batch processes at all. So they send this, and then, effectively, let's see if it gets to the last row — and yes, we have it.
C
So with that we have actually closed the circle, with the full flow: go to the departments and then get the answers. And lastly, I have a fifth stage I wanted to show you to close the demo, where we are actually going to justify why we are using AMQ Streams.
C
We are replaying these topics, because we want to actually generate some reporting and push that to Google Drive, and that would be very useful for higher management, because in this report we can see that collaboration. So let's look at that very quickly. Again, this is not a trivial thing, so we use the Camel DSL.
C
So if we close this — don't save — and we open the last one very quickly, the fifth stage: again, I can open this with the visual editor and look very quickly. In here, what we are doing is consuming both from the questions and from the answers.
C
So we do see there that we need a Camel route to consume the questions, and then we have another, equivalent route that is going to consume from answers, and both do more or less the same: some Java again — JSON to Java — and then each delegates to this action that is processing the stream of events. And if we actually jump in there, we see that there is a — let me see, I think that went a little bit funny, but it should be there — there you go: we have an aggregator.
C
So this is using the aggregator EIP, and you can see from the animation that basically it receives multiple events — these events are the questions and the answers — and we are pairing them together; with this we can then combine questions and answers. And then we need to do a second stage of aggregation, because we want to put all of that into one single document.
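The two-stage aggregation described above might be sketched like this — the correlation key, completion settings and strategy bean names are assumptions:

```xml
<route>
  <from uri="direct:streamEvents"/>
  <!-- stage one: pair each question with its answer, correlated by row id -->
  <aggregate aggregationStrategy="pairQuestionAnswer">
    <correlationExpression><simple>${header.rowId}</simple></correlationExpression>
    <completionSize>2</completionSize>
    <!-- stage two: collect all pairs into one single document -->
    <aggregate aggregationStrategy="collectAllPairs">
      <correlationExpression><constant>report</constant></correlationExpression>
      <completionTimeout>10000</completionTimeout>
      <to uri="direct:renderPdf"/>
    </aggregate>
  </aggregate>
</route>
```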
C
With that collection of questions, then, finally, we render the PDF — using the Java API to create that document — and then we upload it to Drive. In this case, again, similar to Google Sheets, we prepare the API parameters and then we push this to Google Drive. So let's run that: stage five.
C
So when I press enter, we should see, again in the environment, that we have stage five spinning up — there it is — and remember, this is consuming from both topics. And this is the Google Drive target folder where Camel should push the reports. Remember, it is consuming from both topics, aggregating all the information, rendering the PDF report and then, hopefully, pushing the document in here — if everything goes according to plan. And there it is, it showed up.
C
So this is the report, and if I double-click, I can open the document, and effectively we see the questions from strategy — where we have all the questions delivered by strategy — and the different teams, development, operations and architecture, responding to them. And the big boss is going to be happy: he's going to see this magnificent collaboration between the teams, he's going to think that the company is in good health. And, well, that really closes the demo for today.
C
This is the full end-to-end flow of those five stages, and with that we have finished, I believe. So, end of the demo — and since we surely have some questions, and some time, I'm happy to address those.
A
To follow up — it was great that you managed to actually close the loop, and we saw all the components. Yes, there is a question, maybe the only one we have time for, and I think it's twofold. I'm not sure how to pronounce the name.
A
Chronic II, as the name goes in the chat — he's asking whether you're aware of vulnerabilities between consuming the Google APIs and the RHEL server, and I suppose that also translates onto OpenShift in general. They're also asking whether they could do something similar for the Office 365 APIs, for Microsoft.
C
Yeah, so to be honest, it's all about checking the connectors that we have — I mean, we have specific, dedicated connectors. So, okay, let's jump to the first question, about vulnerabilities. I mean, this question would apply to any connector, I think, and any data flow, and anything, right? So for that we have mechanisms in place. I don't have a very specific answer, but basically we have the mechanisms in place if we detect any sort of vulnerability.
C
Of course, the community reacts immediately to patch that — and especially, that's where the support from Red Hat comes in — and we would immediately patch that upstream and downstream. And, you know, occasionally we find them; we're not going to deny that. But we have, like I said, the mechanisms, the secrets and the ways to address that. As for the other question — yeah, the connectors.
C
So there's a ton of connectors, and there are actually always two strategies to address this. We have specific connectors, like the ones I showed you, but also, typically, these are always HTTP APIs. So if the component didn't exist for you — which can happen, although there are something like 300 connectors, so I'm pretty confident that it should be there somewhere —
C
But if not, you can extend Camel: you can customize a Camel component with that particular manipulation, or create a Kamelet binding — sorry, a Kamelet. So you have all of that within reach; it's definitely not limiting if you don't find the connector. And then there's as well the traditional HTTP interaction, so you can always configure your HTTP call with the right tokens, the right credentials and everything, and you should be sorted.
B
We have more or less three minutes. Do you have any specific advice regarding the CI/CD workflow of a Camel K project — if we think about Argo CD — something that you would like to share in terms of best practices? Or just one thing, because we have three minutes.
C
So yeah, there's an ongoing discussion regarding this topic, and to be honest, it really depends on the entity or the organization. Sometimes they are happy using one strategy over the other.
C
So, for example, with Camel K definitions, as you've seen in the demo, all we are doing is really pushing a custom resource, and when the custom resource is there, the operator takes charge and actually does all the actions — it produces that container and rolls it out. So you could actually define pipelines that are just pushing or deleting resources from the different namespaces.
C
Now, some organizations are concerned about this, because they prefer to use immutable images, not having to construct the image every time. I know about some customers who are doing this and are very happy with it — they have no problem whatsoever — but for those that are concerned, there is a parameter in Camel K where you can specify the image repository, and you can specify particular images, so you can say: deploy this particular image in this space.
C
So you can define your pipelines like that and then just promote: implement your flow at dev time, in your dev space, and then, once the image is generated, keep it in the image repository and tell Camel K to point to that particular image.
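As a rough sketch of that last point, an Integration custom resource can point at a prebuilt image instead of letting the operator build one — the resource name, namespace and image reference below are assumptions:

```yaml
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: stage-2
  namespace: prod       # promoting into the target namespace
spec:
  traits:
    container:
      configuration:
        # immutable image built and tested in the dev space
        image: image-registry.example.com/demo/stage-2:1.0.0
```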
A
Yeah, I had a — maybe stupid — question, and then, as we're over the top of the hour, we may close. So I saw some coding that obviously was assisted by that extension — excellent work, by the way, really love it. But once you've done that and you realize that it could be a reusable component, could that be transformed into a Kamelet?
C
Correct — okay, so actually that's a good question; that is part of the process of producing a Kamelet. Part of the development process of a Kamelet is: okay, just implement it with regular Camel, with the regular Camel DSL, make sure it works, that it does all the things that you want, and then just strip out that piece of functionality and define it as a Kamelet.
C
The only difference, though, is that Kamelets and Kamelet bindings are always defined with the YAML DSL. That's supported in Camel K — the YAML — and actually Camel K supports various languages: YAML, XML, Java and others. So basically all you need to do...
C
...in that case is translate whatever DSL you're using into YAML — you follow the rules — but yeah, you strip out that functionality and you define it as a source, or as an action, or as a sink, because in Kamelets that's all you have: you have sources, you have sinks, and then you have middle steps. So the short answer is: yes, you can do that.
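A minimal sketch of what such an extracted Kamelet might look like — the name, label and route body here are assumptions for illustration:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: Kamelet
metadata:
  name: mail-to-json-action            # hypothetical reusable step
  labels:
    camel.apache.org/kamelet.type: action   # a middle step, not a source or sink
spec:
  definition:
    title: Mail to JSON
    type: object
  flow:
    from:
      uri: kamelet:source              # the binding wires the real source in here
      steps:
        - marshal:
            json: {}                   # the stripped-out functionality, in YAML DSL
        - to: kamelet:sink             # hand off to whatever the binding connects next
```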
A
Thank you, thank you for that. I need to research and understand a little bit more — for example, whether the visual tool supports the others, like YAML, or not. And I know that there are various competing visual tools that eventually may make it into a product — we're not there yet — but it's actually very interesting to see. I played with it a little bit, with my little knowledge of what it could be doing.
C
Perhaps just — I don't know if you did already, but, of course, all of this demo is available on GitHub, so you have all the source elements there that you can inspect and try yourself if you want.
A
In the chat, for those that want to check it out a little bit more. Fabio — Fabio, do you remember what's happening next week?
B
Yeah, next week we are going to speak about the news regarding training and certification for all the things about OpenShift, container development and stuff like that. So I think it would be a very interesting topic, because obviously training and improving your knowledge of all this stuff coming with OpenShift is something very important for companies, developers, the community and so on and so forth. And Bruno, thanks again for the great session — it was great, really, it was.