A: Hello, everybody, and thank you for coming to this Cloud Native Computing Foundation online program through CNCF. My name is Daniel from Datagrade, and today we're going to be talking about creating a serverless iPaaS in Kubernetes with Apache Camel K. We're really excited to talk about this topic with you, and hopefully you'll follow along and learn something about it. So, just to briefly cover what we're going to be talking about: today we're covering building your own iPaaS, and why you'd want to do that.
A: What kind of advantages does it bring for your developers? We're going to be introducing Apache Camel and Apache Camel K. Then we're going to talk about building a serverless iPaaS on Kubernetes through Apache Camel, and we're also going to show you a brief demo of how that happens in real time.
A: So, just a little bit about us and what we do. As I said, my name is Daniel; I'm the marketing manager here at Datagrade. We also have our founder and CEO, Andrei Schluska, who is going to be taking you through the demo portion of this program. We at Datagrade help people build integrations on Apache Camel, and we've been doing that for a number of years.
A: Now we've built this tool, Genic, to help people easily build cloud native integrations through Apache Camel: to easily get things up and running and to easily build APIs that connect their infrastructure together, based on the patterns and connectors already implemented in Camel.
A: So today we're talking about iPaaS, specifically integration platforms as a service. This tool set really started to enable the citizen and ad hoc integrator space to grow. Rather than being strictly for technically inclined, integration-specific developers, it allowed less technically knowledgeable users to create the integrations their enterprises needed: developers that maybe have less specific training in building integrations, more general-use developers.
A: These iPaaS platforms give them flexible platforms, usually with low-code or no-code development, to help people who again maybe aren't as technically inclined on the integration front to still create real-time, cloud-based integrations, and third-party integrations as well, connecting their legacy or on-premise systems to modern cloud environments. They're not designed specifically for IT or development teams, which makes them more accessible than, say, the enterprise service bus model.
A: Connecting these disparate systems together is one of its major business functions: creating and managing APIs to pull and sync data from one area of your infrastructure to another. It does all this in real time too, pushing or pulling data from your sources and watching that data flow between integrations as it happens, and transforming the data as well, converting it before it hits its target, based on established data transformation patterns and, again, pre-built connectors for common integration use cases.
A: So the integration developer really is the chief operator here, and if they can't get integrations up and running quickly and efficiently, then your integration project will never get off the ground. So we're really talking about iPaaS for developers here. It gives developers the ability to deploy and manage these complex integration streams, connecting parts of your company's critical infrastructure together as you need it to happen, using pre-built integration templates to connect specific third-party applications or to send pull and push requests between them.
A: Whatever server or hardware you need to connect to, you're essentially creating an ecosystem through which all of your business applications can automatically receive whatever critical information they need in real time, no matter where that information is coming from.
A: You can modify paths between integrations and implement your connectors and patterns as you see fit, giving your developers the freedom for customized deployments: handling different types of integrations and different integration functions, again based on these pre-built resources, deploying them wherever they need to go, developing routes that fit into whatever environment you intend to deploy to, and automatically configuring those routes and integrations for whatever environment you need to run them in.
A: Now, talking about the framework that you want to build an iPaaS solution on: that's where Apache Camel comes in. Apache Camel is a Java-based integration framework built on established integration patterns from the book Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf. It has support for numerous DSLs, and for data formats as well (XML, CSV, JSON, YAML, etc.) for receiving and sending specific information.
A: Depending on what kind of data you need to receive, it operates on a messaging system. These messages contain information determined by the sender (a header, body, ID, timestamp, whatever information you need to deliver) and travel from endpoint to endpoint to the receiver through the Camel context runtime, which we'll dive into in a bit. That's where data is run through the processors and components before it's delivered to its destination.
A: So let's take a brief look into the Camel architecture itself. It's all based around that Camel context runtime, and the core aspect of that is the routing engine. That connects endpoints to the Camel integration platform, as well as connecting endpoints to each other through DSLs, again operating on that same messaging system. That's where the processors come in, performing tasks like routing, transformation, and validation through these enterprise integration patterns. And then you have Camel components, which are what allow Camel to connect to external systems such as HTTP, FTP, JMS, DNS, GraphQL, IRC, or Amazon Web Services.
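As a rough illustration of how those pieces fit together, a Camel route in the Java DSL wires a consumer endpoint to a producer endpoint through a processor. This is a minimal sketch; the file path, header name, and JMS queue name are made up for the example:

```java
import org.apache.camel.builder.RouteBuilder;

// A minimal Camel route: the routing engine moves messages from one
// component endpoint to another, with a processor step in between.
public class OrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("file:orders/in")                       // component endpoint (consumer)
            .process(exchange ->                     // processor: inspect/enrich the message
                exchange.getIn().setHeader("received",
                        System.currentTimeMillis()))
            .to("jms:queue:orders");                 // component endpoint (producer)
    }
}
```

Each `from`/`to` URI names a Camel component (file, JMS, HTTP, and so on), and the processor in the middle is where the enterprise integration patterns (routing, transformation, validation) get applied.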
A: Now, when it comes to deploying your Camel integrations on Kubernetes, that's where Apache Camel K comes in. This is essentially the branch of choice for deploying your integrations on Kubernetes, so it's really what you want to be looking at if your deployment environment is intended to be a microservices or serverless deployment through Knative or OpenShift, etc. Apache Camel K automatically configures your integrations and whatever resources you deploy with it on your Kubernetes cluster for this serverless architecture; there's essentially no need to worry about setting up your Kubernetes deployments.
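To make that concrete, a Camel K integration is typically just a single DSL file handed to the `kamel` CLI. A minimal sketch, where the class name and timer period are illustrative choices rather than anything from the talk:

```java
import org.apache.camel.builder.RouteBuilder;

// Hello.java -- the entire "project": no pom.xml, no Dockerfile, no Deployment YAML.
public class Hello extends RouteBuilder {
    @Override
    public void configure() {
        from("timer:tick?period=5000")               // fire every 5 seconds
            .setBody().constant("Hello from Camel K")
            .to("log:info");                         // write the body to the pod log
    }
}
```

Running `kamel run Hello.java` hands this file to the in-cluster operator, which builds the image and creates the pod; there is no local build step.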
A: Now, of course, when we talk about building a serverless iPaaS, it helps to understand what the benefit of serverless is for integrations. With serverless architecture, you're building and running services and applications natively in the cloud without having to worry about configuring or managing servers, because the server maintenance is all handled by the cloud providers.
A: That means you have more time available for developing the complex APIs that you need to create to keep your business up and running.
A: Obviously, as we've talked about, it reduces costs on server management. Again, there's no need for you to manage the physical infrastructure required for all this data storage; it's all done through your cloud provider, and they're taking care of that. And because your cloud provider is providing these resources, on your end this is scalable: a scalable deployment that grows and shrinks alongside your iPaaS, optimizing cost and resource efficiency.
A: On your end, you still get that event-based, agile architecture that serverless provides, and with higher availability and observability across your infrastructure to boot. It takes advantage of this cloud-based data storage to ensure that no matter where you are, and no matter when it is, you can access the integrations and APIs directly on the cloud.
A: The issue, though, is that your sources, no matter where they are, even if they're being delivered to this serverless deployment, are still writing data in various formats, and your iPaaS and your target destinations might not be compatible with those file formats. Without a common framework that can accept inputs received as messages from your sources and translate the information from those sources into a desired format, your targets can't understand them.
A: Okay, so now that we've gone through the fundamentals of how Camel and Camel K operate, and also explored why iPaaS is so beneficial and what the benefits of serverless architecture are for your iPaaS, we come to the question: okay, how do I get started?
A: How do I build a serverless iPaaS, and how can Apache Camel K help me accomplish that? From setting up the environment, to building the integrations and APIs, to testing your routes and running your projects, essentially outlining the necessary steps to connect to that serverless deployment as well. And this is where we're going to segue into the live demo portion with Andre. So, Andre, if you'd like to take it away.
B: Yeah, hello, everybody. Thank you so much, Daniel, for this topic on iPaaS. My name is Andre; I'm the CTO and co-founder of Datagrade, and what I have for you today is a demonstration of how to build an API using the Apache Camel K framework and get it deployed into Kubernetes.
B: Before we get started, let me guide you, from a developer standpoint, through what Apache Camel K is and how it actually works. The usual approach to software development using Apache Camel would be to have a development environment, something like Eclipse, IntelliJ, or Visual Studio Code, and then build a Maven project, build some Spring Boot microservices, or however the approach might look. Now, for Camel K.
B: This is a little bit different, because what we actually need is only the actual DSL integration in Camel.
B: This is something we all know already as Apache Camel developers; if you're new to Camel, you might want to look into the Camel DSL.
B: So what we write is the Camel DSL, and essentially we're going to throw it over the fence, as I like to call it, into the Kubernetes cluster. In the Kubernetes cluster, what we have is an operator pattern. The operator basically waits for us to deploy an integration, and what that means is we end up with a DSL which we hand over to this operator, instead of us dealing with it ourselves: how can we build a JAR file? How can we get to an executable?
B: It is the duty of this specific operator to build all that for us, to build it correctly, and to get it deployed as a pod in our Kubernetes environment.
B: So if you look at that picture: before we had Camel K, we had complex code, we had lots of Java files, we had configuration files and many things. We had to build a pipeline using build tools, using Jenkins, whatever tooling you need, and then we had to get that deployed and monitor it somehow, which was a lot of work.
B: Now, with the Camel K approach, the idea is: we have one file which only has the DSL, so really only what we need. We then have one command line tool, provided by Apache Camel K, which allows me to essentially interact with the operator, and this will create one pod. So it makes it really easy for us.
B: So, enough talking, let's look into that. The question might be: okay, how can I work with that? Here at Datagrade we obviously use our tool Genic to do that, but I want you to know that what I'm doing here is really something you can do with any tool. The only things you will need are a text editor and the Apache Camel K command line.
B: What it's supposed to do is just expose an API on our Kubernetes cluster, and it's supposed to read a file (on our local hard drive, in this case in the Kubernetes cluster) which is a JSON file. It reads that JSON file, which is just like a user database, and it should just search for the user and then respond.
B: So that's a typical use case: we want to build an API which grabs some data, converts it to JSON, and sends it back to the caller, and what I'm going to show you is how you can do that in basically three, maybe four lines of code. That's it. So the first thing we're going to need here is an API, and I have done a little bit of homework, so I've created an API already here in the system.
B: Again, you can come up with any kind of OpenAPI specification, or whatever; it's really up to you. We're going to have one REST API GET call, so we're going to expose our API. In order to get that to work, there is something called Camel K traits, and we're going to have to activate the so-called ingress trait. We have to enable it and say which host to use. In order for that to work, you will have to have an ingress controller installed in your Kubernetes environment.
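A sketch of what that REST definition and the ingress trait flags might look like; the path, class name, and host name here are assumptions for illustration, not the exact ones from the demo:

```java
import org.apache.camel.builder.RouteBuilder;

// UserApi.java -- expose a single REST GET endpoint from a Camel K integration.
public class UserApi extends RouteBuilder {
    @Override
    public void configure() {
        rest()
            .get("/user/{userId}")      // {userId} arrives as a message header
            .to("direct:lookup");       // hand the request to a route below

        from("direct:lookup")
            .setBody().constant("TODO: look up the user"); // placeholder body
    }
}
```

The ingress trait is then switched on at deploy time, e.g. `kamel run UserApi.java --trait ingress.enabled=true --trait ingress.host=api.example.com` (host name made up), which only works if an ingress controller is installed in the cluster.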
B: That is something the cloud providers usually have ways of dealing with pretty quickly. So I have created that already.
B: So we have the ingress enabled, we have the REST API, and what we're going to need here in this case is our JSON. This is this one here, and it's just a very, very simple JSON: it has two users in it, user ID 1 and user ID 2. That's really about it. And what we're going to do is create a new resource.
B: In this case I have pre-created it already, so I can use that, but really this is just the content of our resource file. If we instruct Camel K to use a resource file, what it will do is simply put that into our pod, into the folder /etc/camel/resources, so this file will just end up being in our pod. That's about it.
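In sketch form, attaching such a resource and reading it back might look like this; the file name users.json, its contents, and the class name are hypothetical stand-ins for the demo's user database:

```java
import org.apache.camel.builder.RouteBuilder;

// Assumes the integration is deployed with:
//   kamel run UserDb.java --resource file:users.json
// where users.json is something like:
//   [{"id": "1", "name": "Alice"}, {"id": "2", "name": "Bob"}]
// Camel K then mounts the file at /etc/camel/resources/users.json in the pod.
public class UserDb extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:load")
            // The simple language's resource: prefix loads the file's
            // content into the message body.
            .setBody(simple("resource:file:/etc/camel/resources/users.json"));
    }
}
```

The point of the `--resource` flag is that the route never needs to know where the file came from; it just reads a well-known path inside the pod.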
B: So the first thing we need to do here is: we basically want to read this file into our body.
B: I can do that again with the body operation, but now I'm selecting a JSON path, and again, I did a little bit of homework, so I prepared the JSONPath expression. What this will do is basically look up the user ID which is coming from my REST API call, and I'm going to explain what that means in a second.
B: All right, so I'm going to run this, and what it'll do is what I was just talking about in the PowerPoint presentation: it builds the code, puts it into the operator, and the operator will now start working on it, making sure that our pod gets built, that our integration gets built. So we can monitor that here.
B: And while this is booting, I'm going to have a look into the actual code it created. This is basically it, basically one, two, three lines of code: we're going to set our body, we're going to read this, and then we're going to marshal that to JSON with Jackson.
B: It is already coming up, so it's already built; it's essentially there. It usually takes us only 20 or 30 seconds, maybe 15 seconds, to boot this whole Camel K integration, and if we now call it, and we'll have to give it a user ID, then it responds with the data in a nice, beautiful JSON format.
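Putting the pieces together, the whole demo integration probably amounts to something like the following sketch; the path expressions and file location are reconstructed from the description, not copied from the demo:

```java
import org.apache.camel.builder.RouteBuilder;

// The complete API: REST in, file read, JSONPath lookup, JSON out.
public class UserApi extends RouteBuilder {
    @Override
    public void configure() {
        rest()
            .get("/user/{userId}")
            .to("direct:lookup");

        from("direct:lookup")
            // 1. load the mounted user "database" into the body
            .setBody(simple("resource:file:/etc/camel/resources/users.json"))
            // 2. pick the user whose id matches the {userId} path parameter
            //    (Camel's jsonpath language allows embedded simple expressions)
            .setBody().jsonpath("$[?(@.id == '${header.userId}')]")
            // 3. marshal the result back to JSON (Jackson under the hood)
            .marshal().json();
    }
}
```

A call like `curl http://api.example.com/user/1` (host assumed, as configured via the ingress trait) would then return that user's record as JSON.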
B: So that's about it; that's really all it takes to build a small integration. Let's have a look and revisit the code one more time. It's just one, two, three, four lines of code: it basically uses the simple language to read our file, it uses JSONPath to grab the specific data from that file, and it then marshals that into a JSON format. This is really all it takes to build a small API, and that's the beauty of Camel K: it pre-builds the ingress description for us in the way we need it, so we don't have to care about that, and it builds the service for us, so we really don't have to take care of anything.
B: So this is basically how the YAML file would look, but again, it's nothing you have to worry about; it is all being done by the Camel K framework. So yeah, guys, that is about it. I hope you enjoyed the demo of Apache Camel K and that you get a sense of how quickly you can build an integration, and this is just the beginning. The Apache Camel K framework is so powerful that it allows you to build a whole middleware integration and iPaaS around it.