From YouTube: Lightning Talk: Integrating Debezium and Knative or How to Stream Changes the Knative Way - Christopher Baumbauer

Description

Lightning Talk: Integrating Debezium and Knative or How to Stream Changes the Knative Way - Christopher Baumbauer, Atelier Solutions

This talk highlights some of the work Chris did to stream database change events from Debezium into Knative, keeping an in-cluster data cache up to date. While highlighting that one useful use case, the talk goes into more detail on what it took to add support for streaming events using Knative instead of Apache Kafka, as well as some of the caveats and pitfalls to beware of if you are also looking at converting your microservices into Knative-enabled services.
So hi everyone, I'm Chris Baumbauer. For those of you who don't know me, I'm the face of Atelier Solutions. I've been working with Knative for quite a while (sorry, looks like we actually have a timer running as well). I did some work with GitLab, introducing their serverless offering. I also wrote a Knative runtime using some of the early work that TriggerMesh had provided, and very recently I started creating a new meetup within my region of Northern California, CNCF Placer, hoping to bring at least the cloud native and Knative material into a smaller community and grow tomorrow's youth.

So I'm going to tell this more as a story. We're going to start off with an idea that I bantered back and forth a couple of years ago.
The idea was about trying to find a way of modernizing systems, especially for larger companies. They have their databases, and they're more than likely not going to get rid of them, but they still want to move toward the cloud; they want to experiment; they want to do something useful with that data. So: we have an Oracle database, and we want to capture the changes that come in from all the normal CRUD operations.
So how do you do that? At the time we did look at Debezium. It was fairly early in the project: it supported Postgres and MySQL, and they had started playing around with Oracle, but it was also very tied to Kafka, and to Kafka Connect in particular. And one of the awesome things about Knative, and Kubernetes in general, is that it breaks people free from that kind of vendor lock-in.
So we decided, as Seb mentioned in his previous talk, to go the Frankenstein route, where I wrote a Knative source. It worked, but it was very ugly, and trying to set up the installation and configuration and track all the database changes was a real pain. Fast forward to about a year ago, when I re-examined the question, looked at Debezium again, and realized:
Oh hey, with Debezium they've actually broken out part of the Kafka Connect aspect and decided to start supporting other providers, such as Kinesis and Google Pub/Sub, but nothing on the Knative side. So why don't we just go ahead and Knative-ify it? On the plus side, as part of that breakup, the Debezium Server component supports these other cloud-based eventing systems.
They added support for CloudEvents, which actually helps a lot. But then we also needed to containerize it, and for their default install it already ships in Docker. Okay, so that's two out of the three things. The only thing that was missing was a way to take those database changes and stream them out into the Knative pipeline.
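For illustration, a change record exported in CloudEvents format looks roughly like the sketch below. The attribute names and values are approximate and heavily trimmed (a real Debezium event carries far more metadata), and the table and its columns are made up for this example:

```json
{
  "specversion": "1.0",
  "id": "voting-connector;txId=571;lsn=24023128",
  "source": "/debezium/postgresql/voting-connector",
  "type": "io.debezium.postgresql.datachangeevent",
  "datacontenttype": "application/json",
  "data": {
    "before": null,
    "after": { "project": "knative", "count": 1 },
    "op": "c"
  }
}
```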
That contribution got accepted and is part of the 1.9 release of Debezium, which now exposes an HTTP client that will stream your database changes into a listening webhook. And this is where the awesomeness of Knative comes in, because by exposing something like K_SINK, I can now use things like SinkBinding to pipe to a broker or a trigger or wherever. And, okay, that was pretty much it. Profit.

So for the sample integration, which I do have as part of a GitHub repo, it pretty much goes: an on-prem database into a Debezium service, which spits things out to a broker; a Knative Trigger then hands the events to a Knative service that massages the data and dumps it into Redis, where I have another Kubernetes service listening on the back end to report the results. In this case it's a project voting system.
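The Knative side of that wiring can be sketched with two objects (all names here are made up, and the broker is just the default one): a SinkBinding that injects the broker's address into the Debezium deployment as the K_SINK environment variable, and a Trigger that routes the change events on to the massaging service.

```yaml
# Inject the broker's address into the Debezium Server deployment as K_SINK.
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: debezium-binding          # hypothetical
spec:
  subject:
    apiVersion: apps/v1
    kind: Deployment
    name: debezium-server         # hypothetical Debezium Server deployment
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
---
# Route change events from the broker to the service that writes to Redis.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: votes-trigger             # hypothetical
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: vote-massager         # hypothetical Knative service
```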
Now, in going through all of this: the nice thing about Java and Quarkus, and a lot of the work that has gone into that ecosystem over the last twenty-some-odd years, is that you have a properties file. You can expose it as environment variables, you can set it in a ConfigMap, you can put it pretty much directly into your deployments. It works out great.
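As a sketch of that, Debezium Server is driven by a single properties file; something along these lines points its HTTP sink at whatever K_SINK resolves to. The connector choice, property names, and connection details here are illustrative only; check the Debezium Server docs for your release:

```properties
# Send change events over HTTP instead of Kafka (Debezium Server's "http" sink).
debezium.sink.type=http
# The HTTP sink can target the URL injected by a Knative SinkBinding.
debezium.sink.http.url=${K_SINK}
# Emit records in CloudEvents format.
debezium.format.value=cloudevents
# Source connector configuration; hostnames and credentials are placeholders.
debezium.source.connector.class=io.debezium.connector.postgresql.PostgresConnector
debezium.source.database.hostname=db.example.internal
debezium.source.database.port=5432
debezium.source.database.user=debezium
debezium.source.database.dbname=votes
debezium.source.topic.prefix=votes
# Persist offsets so a restart resumes instead of re-reading from scratch.
debezium.source.offset.storage.file.filename=/data/offsets.dat
```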
Unfortunately, getting the two to play together is still a little bit up in the air, so protecting secrets is a bit of a balancing act. But it does go to show that Knative itself provides the Lego bricks: with everything listening on a common port and speaking CloudEvents, it becomes easier to plug data in and out. It still requires a bit of work to go through, though. Some of the Functions work that was discussed earlier today will help with the transformation side, especially with regard to exchanging payloads going from a source to a target, but that still requires effort.
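As an example of that payload massaging, here is a minimal sketch of the kind of transform step the voting-system pipeline performs. The event shape is simplified, and the table ("votes") and its columns ("project", "count") are invented for this sketch:

```python
import json

# A trimmed Debezium-style change event, as the HTTP sink would POST it.
# Real events carry much more metadata under "source".
SAMPLE_EVENT = json.dumps({
    "payload": {
        "op": "u",  # c = create, u = update, d = delete, r = snapshot read
        "before": {"project": "knative", "count": 41},
        "after": {"project": "knative", "count": 42},
        "source": {"table": "votes"},
    }
})

def massage(raw):
    """Turn one change event into a (command, key, value) triple for Redis.

    Deletes map to a DEL of the row's key; anything else upserts the row,
    keyed by project name. Events for other tables are ignored (None).
    """
    payload = json.loads(raw)["payload"]
    if payload["source"]["table"] != "votes":
        return None
    if payload["op"] == "d":
        return ("DEL", "votes:" + payload["before"]["project"], None)
    row = payload["after"]
    return ("SET", "votes:" + row["project"], row["count"])
```

In the actual service, these triples would be applied with a Redis client (for example redis-py's `set`/`delete`) inside the handler that receives the CloudEvent from the Trigger.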
Part of that is watching out for all the changes; otherwise the changes from additional instances can clobber each other. And it does still require that Debezium itself have access to the on-prem database. But once you get the data from the database into your cluster, you can stream it whichever way you want.
So that's pretty much it for the spiel. If you want to learn more, I've got contact info up, as well as the place where you can pull the code. And that's pretty much all I got, so: thank you.