From YouTube: Webinar: Event-driven architecture with Knative events
Description
In this session, presenters will review the progression from monolith, to microservices, to event-driven architecture. Participants will learn about CloudEvents, how to use Knative Eventing as an events intermediary, understand Knative components, and extensibility with the operator model (Sources, Brokers). At the end, there will be an E2E demo showcasing event sourcing, custom events, and autoscaling with Serving.
Presenters:
Nicolás López, Senior Software Engineer @Google
Bryan Zimmerman, Product Manager @Google
A
Event-driven architecture with Knative events. I'm Jerry Fellain and I will be moderating today's webinar, and we would like to welcome our presenters today: Nicolás López, Senior Software Engineer at Google, and Bryan Zimmerman, Product Manager at Google. We just have a few housekeeping items before we get started. During the webinar you will not be able to talk as an attendee. There is a Q&A box at the bottom of your screen, so please feel free to drop your questions in there and we will get to as many as we can at the end.

A
This is an official webinar of the CNCF and, as such, is subject to the CNCF code of conduct. So please do not add anything to the chat or questions that is in violation of the code of conduct. Please be respectful of your fellow participants and presenters, and please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. And with that, I will hand it over to Nicolás and Bryan for today's presentation.
C
Hi everyone, I'm Nick Lopez. I'm a senior software engineer here at Google, and I work with the team that builds Knative Eventing and other related products.
B
Great. So today we're going to be talking about Knative Eventing. We'll start out with some basic concepts. First of all, we need to talk about the history of how we got to this place when it comes to the need for event-driven architecture, and so we'll start by talking about the rise of microservices.
B
So to start, we'll talk about the rise of microservices. Going back in time, most applications were built as a monolith, and when I say monolith in this case, I'm referring to an application where there's a single application layer that contains everything required for the application, integrated into one platform. Take the example of an e-commerce store: all of my services, whether they be processing payments, serving up the website, managing my customers, or sending recommendations.

B
All of that is done by that one piece of code, or large piece of code, and not separated into individual services. There are some inherent issues in this particular pattern. Most notably, it can scale in only one direction: vertically. You can create bigger machines, and with a little work you can add more machines, but you can't scale out individual components of the application.

B
Secondly, it can be very overwhelming to build, deploy, and maintain as the application grows. You can imagine, as you add more and more features, functionality, and teams to your application or your application ecosystem, things can become unwieldy in terms of impact from one system to another. This makes deployment management and coordination very difficult over time, and similarly, it can become a nightmare to change anything.
B
So the solution to this is microservices, which I'm sure everybody here is familiar with. Talking about the migration path: people typically don't flip a switch and go from a monolith to microservices. They typically migrate in a staged way, starting by separating out a couple of key components that may benefit most into their own services attached to the core monolith.
B
So what's the answer to the problems that microservices create? Don't get me wrong: microservices is a great application pattern that solves all of the problems we talked about when it comes to monoliths, and even more. Event-driven architecture is an attempt to solve some of those issues with complex, spiderweb microservice applications. To understand this, first let's talk about what an event is. Here are some key concepts to get there. First, we start with an occurrence. An occurrence is something that has occurred within your application or environment that could warrant an action.
B
The event then triggers an action, and we'll describe how this works later, specifically with Knative Eventing. In simple terms, building in an event-driven way ensures that you are reacting to the facts in your environment, or facts about your application, rather than having to construct every point-to-point interaction as needed.
B
Let's consider an example: a new user is added to my e-commerce store. In a microservice application, I would maybe have a new-user service, or something responsible for that. It would then have to write to the database, and once complete, signal to the email service that would send the welcome email, and likely a host of other onboarding activities that would be required when that new user is added to the system.
B
I would need to update the new-user service, potentially other services that depend on the completion of that compliance step, and then build the compliance service as well, and then update them all together, because there is the requirement of interaction. So I need to manage that deployment. You can imagine how this is very complex. In an event-driven pattern, I only need to build my new service and respond to the new-user event. This makes build and deployment much simpler, and I don't have to manage the concerns of the other services.
B
So in an event-driven pattern, there are three key concepts: producers, the intermediary, and the consumers. Your services don't communicate directly with each other. Instead, they communicate with an event intermediary, and we'll discuss what that looks like in Knative Eventing specifically. That intermediary will then communicate directly with the consumer applications.
B
In some cases there could be several layers of intermediaries, but in general, for our purposes, we'll talk about just one layer. We'll discuss this in a moment, but it should be noted that the producer does not need to be a service within your application that you create. Instead, it could be a source that you put in place to react to the occurrence and deliver the corresponding event to the intermediary.
B
And perhaps most importantly, it can extend organically. We discussed in the earlier example having to add that new compliance service. In this type of pattern, we wouldn't have to affect any of the other components of our application; in a microservices pattern you, of course, would have to. So in a lot of ways, when you are intending to extend your application, which most of us are, this model offers a superior ability to do that organically, without having to worry about what it could break in the past.
B
So let's see how Knative can fit into this. Knative Eventing is our intermediary of choice. It is a set of composable primitives to enable late binding of producers and consumers. For an intermediary to work, there needs to be some standardization; otherwise you actually still have to build a shared understanding into your code, for example what the messages will look like and how they'll be delivered. To address this, Knative utilizes the CloudEvents format as the message envelope. This is, of course, a CNCF project.
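For concreteness, a CloudEvents v1.0 envelope in its JSON structured form looks roughly like this (the event type and data here are illustrative examples, not from the webinar):

```json
{
  "specversion": "1.0",
  "type": "com.example.user.created",
  "source": "/ecommerce/users",
  "id": "a9c1f3e0-0001",
  "time": "2020-10-29T17:00:00Z",
  "datacontenttype": "application/json",
  "data": {
    "userId": "42",
    "email": "user@example.com"
  }
}
```

The `specversion`, `type`, `source`, and `id` attributes are required by the specification; everything in `data` is the producer's payload.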
B
The consumer is any service that can receive the event, and the broker represents the event mesh. Events are sent to the broker and then sent along to any specific consumer. The trigger is the entity that defines the subscription to a particular event by said consumer, and thus directs the filtering appropriately.
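As a sketch of what such a subscription looks like: a Trigger names the broker it listens on, the event attributes to filter for, and the subscriber to deliver to. The names and event type below are illustrative, not from the demo:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: new-user-trigger
spec:
  broker: default                        # the broker (event mesh) to subscribe to
  filter:
    attributes:
      type: com.example.user.created     # deliver only events of this type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: welcome-emailer              # the consumer service (hypothetical)
```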
B
Another key concept to talk about is the event and source registry. This is a key component of developer experience: if every developer has to learn from square one what events are available and what sources are available, it makes things a lot more difficult. So there exists a pattern of an event and source registry, where you can learn what can be added and what events can be reacted upon.
C
So what we're looking at here is a setup in which we have a GKE (Kubernetes Engine) cluster running an image-processing pipeline, as I was mentioning. The purpose of this pipeline is that users, on the left-hand side, can drop images into what we call here the images-input Cloud Storage bucket. What we will want to do is connect this via a source, which we will be looking at in detail in a minute, and which will trigger an event that will be received by our filter service.
C
Our filter service will be in charge of filtering out any images that we don't want to process in our system, if they have content that is undesired, and we will be using the Vision API to do this. After this, the filter will be producing an event as well.
C
And finally, we have a watermarker service in the middle, which will be receiving an event produced by the resizer service, and this service will be adding a watermark to our images. All of the resizer, watermarker, and labeler services will be dropping the results of their processing into a Cloud Storage bucket as well, which we have called here the images-output, and which would be consumed by end users of this image-processing pipeline.
C
So we already have the setup for some of these pieces in our currently running cluster.

C
What we will be doing specifically during our demo is creating all of these arrows that you see here. Specifically, we're going to be creating the arrows to connect the images-input bucket with the services in our GKE cluster, and to connect the services within the cluster, so we'll be setting all of these up in the next few slides. The first one that we're going to be setting up, specifically, is a Cloud Storage source.
C
So what this one will be defining, and we'll look at exactly what it looks like in a YAML file in a little bit in the demo, is that this Cloud Storage source will fire whenever an object gets stored in our input bucket.
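A CloudStorageSource manifest of this kind (from the knative-gcp project) looks roughly like the following; the bucket name follows the demo, but the exact API version and field names are a hedged sketch rather than the demo's exact YAML:

```yaml
apiVersion: events.cloud.google.com/v1
kind: CloudStorageSource
metadata:
  name: images-input-source
spec:
  bucket: images-input         # fire an event whenever an object is stored here
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default            # deliver the resulting events to the default broker
```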
C
After that, we'll be setting up a trigger. The trigger will allow us to route the event from the broker to the filter service.

C
And right after that, we're going to be setting up some custom triggers for the events that are produced by these services. The triggers will be configured so that the resizer, labeler, and watermarker services consume the events that they're interested in consuming.
C
Okay, yes, thank you. The first thing we're going to check is that we have our cluster running, and that we have our eventing pods running. Bringing up the cluster and starting up eventing takes a couple of minutes, so we did that before the demo to make this a little bit faster. What we see here is that we have a controller and a webhook running in a couple of namespaces corresponding to our services for Knative Eventing, and we're also going to list the services that we're going to be connecting in our example.
C
And the first thing that we're going to do now is bring up our broker. We're going to use a gcloud command to bring up our broker, but behind this gcloud command is a very simple YAML that is just bringing up a default broker, and here we're just checking that the status of our broker is ready to be used.
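The YAML behind that gcloud command is indeed small; a minimal default-broker manifest is roughly:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default        # the name triggers refer to via spec.broker
  namespace: default
```

This is a sketch of the upstream Knative Broker resource; a vendor implementation may add annotations selecting a specific broker class.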
C
The next YAML that we're going to be applying is for the filter trigger, the first trigger that I showed you. So again, this is a resource here for a trigger.

C
The Cloud Storage source that we just created a little bit ago will be creating events of this type, which will then be routed by the broker to our filter service.
C
This next trigger is for the service that's in charge of receiving the image that has been filtered. The filter here is for the specific type of events that our filter service is going to be producing, and it's pointing to the subscriber, which is the labeler service.
C
So here we can see how, with the same type of event produced by the filter service, we can have two other services downstream that are going to be connected without having any knowledge of each other. Let me just go ahead and clear this for a second.
C
Apologies for the background noise. So this is our last trigger. This one is going to be filtering for events of type "file resized", which corresponds to events produced by our resizer service, and our subscriber in this case is our watermarker service.
C
Okay. So at this point we have all of our services running, and we have connected them effectively with our storage source and our triggers. We're going to run a kubectl command to get our storage sources, and another one to get our triggers, just to make sure that everything is running as expected. We see our storage source here and it's ready, and we see our triggers that have been set up for our four services, which are also ready and running.
C
Okay, so now we should be ready to actually execute our image-processing pipeline. At this point, what we're going to be doing is copying images into our input bucket, and at this point our pipeline should be processing. One of the features that Knative Eventing has is that it also produces traces, and traces can be consumed by Zipkin or by Stackdriver. What we're seeing here is what the traces look like in Stackdriver.
C
Let's go ahead and expand it, and we can see here the trace of our event as it passed through the broker and then went to our filter service.
C
We can check details of all the timing in that interface as well; I'm not going to go into many more details here. Let's go ahead and first check out the output of this processing, which should be ready. I'm just listing out the files in our images-output bucket, and we see that we had three files produced for the Paris file that I uploaded, and three more files for the river file that I uploaded, corresponding to the resized file, the file with the labels (which is the text file), and our watermarked file.
C
We can also go ahead and check out here, for instance, one of these label files which got produced. It has all the labels identified from that image, and we can also check it out in Cloud Storage.
C
So this is the result of our file being resized and watermarked. I also have another tab here showing what that initial file looked like.
C
Okay, let me just skip to this last slide, to recap for a second before I turn it back to Bryan. What we saw here is this image-processing pipeline. We had a Google Kubernetes Engine cluster running, with some Knative services already running; all of these services were disconnected.

C
We had pre-created our storage buckets, but these buckets did not have any effect when any action occurred there. We created a storage source and a series of triggers to connect these services, and we actually saw this processing happen, and we also saw for a couple of minutes there what the tracing of one of those events looks like in Stackdriver. So I think with that, I'm going to stop sharing my screen and turn it back to you, Bryan.
B
Right, so let's go over, at this point, a few concepts that you saw, at a deeper level. But first of all, let's just review what you saw here today. In that demo you saw how you can use Knative to set up an event-driven architecture in less than 15 minutes.
B
What you saw here was an application with no inter-service communication, where we set up the triggers in real time before your eyes, and the result was an application that was fully decoupled, with true separation of concerns. Observability was easy to set up, and it was easy to understand what was happening with your application, as you saw in the tracing as an example. You'll notice that we didn't have to update a single line of code or wire up the application during this process.
B
Sources are the components that observe and react to facts in the environment or application and deliver the appropriate event to the broker. In GCP, for example, and this is what you saw in Nick's demo, there are many built-in sources related to activity that happens in Google Cloud, such as the source related to the storage bucket. This is an example of a vendor-specific source, and you didn't have to figure out how to understand and observe the occurrence; that source was built for you in that vendor-specific way.
B
There's also a number of community sources, such as GitHub, Kafka, the API server source that responds to Kubernetes activity, and much more. What's really exciting here is that as the community grows and that list of sources improves, we all benefit from the rapidly expanding set of ways to generate consistent and actionable events.
B
We'll
talk
later
about
how
to
get
involved
with
the
community,
but
producing
your
own
sources
is
a
really
good
way
to
do
that,
especially
if
what
you're,
if
the
problem
you're
solving,
is
likely
to
be
solved
by
others.
If
we
all
work
together
and
produce
that
rich
catalog
of
sources,
everybody
benefits.
B
Knative is designed to be as native to Kubernetes as possible, hence the name, in that we use the standard way of extending Kubernetes: the custom resource definition, or CRD. CRDs define new resource types, similar to classes in a programming language. These are then instantiated to create custom objects, or COs.
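To make the class/object analogy concrete, here is a minimal CRD sketch. The resource type it declares is entirely hypothetical; once applied, `kind: Example` objects can be created and listed just like built-in resources:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.demo.example.com      # must be <plural>.<group>
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    kind: Example                      # the "class"; objects of this kind are the instances
    plural: examples
    singular: example
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
```

Knative's Broker, Trigger, and the various sources are all defined this way, which is why kubectl can manage them like any other Kubernetes object.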
B
Broadly
speaking,
there's
two
high-level
types
of
sources:
there's
push-based
and
pull-based
push-based
sources
are
where
an
upstream
producer
pushes
an
event
directly
into
an
address
that
must
be
exposed
to
the
producer.
So,
for
example,
if
you
have
a
public
url
exposed
and
github
was
to
push
events
into
that
particular
pushed
based
source.
That
source
would
then
convert
it
to
cloud
events.
Format
deliver
it
to
the
appropriate
broker,
whose
trigger
would
then
deliver
it
to
the
appropriate
consumer.
B
But
the
disadvantage,
of
course,
is
that
you
have
to
expose
a
public
url
and
that
that's
not
the
only
type
of
source,
because
for
a
lot
of
enterprises,
that's
not
going
to
be
possible
poll
based
sources
is
the
solution
for
that.
So
in
a
poll
based
source,
there
is
something
always
running
to
pull
for
changes
and
there's
no
need
to
have
network
access
into
the
source.
You
just
have
to
have
network
access
out
from
the
source
to
the
producer,
so
no
endpoint
needs
to
be
exposed
publicly
at
all.
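To sketch the delivery half of such a source: once a pull-based source has polled its producer and observed an occurrence, it delivers a CloudEvent to the broker over HTTP. A minimal Python illustration follows; the attribute-to-header mapping follows the CloudEvents v1.0 binary content mode, while the broker URL and event names would be whatever your deployment provides:

```python
import json
import urllib.request


def to_binary_headers(attributes):
    """Map CloudEvents context attributes to the ce-* HTTP headers
    used by the CloudEvents v1.0 binary content mode."""
    required = {"specversion", "type", "source", "id"}
    missing = required - attributes.keys()
    if missing:
        raise ValueError(f"missing required CloudEvents attributes: {missing}")
    return {f"ce-{name}": str(value) for name, value in attributes.items()}


def deliver(broker_url, attributes, data):
    """POST one event to the broker ingress (URL is deployment-specific)."""
    headers = to_binary_headers(attributes)
    headers["Content-Type"] = "application/json"
    request = urllib.request.Request(
        broker_url,
        data=json.dumps(data).encode("utf-8"),
        headers=headers,
        method="POST",
    )
    # The broker replies with a 2xx status on accepted delivery;
    # a production source would retry on failure.
    return urllib.request.urlopen(request)
```

Real sources built on the Knative source libraries handle retries, scaling, and status reporting for you; this only shows the shape of the hand-off to the broker.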
B
There
are
some
complexities
as
far
as
scaling
that
is
handled
depending
on
the
implementation
that
you're
using,
but
these
basically
allow
you
to
keep
your
private
networks
private
and
they
take
care
of
the
pulling
and
then,
of
course,
delivering
to
the
broker
and
then
ultimately,
the
consumer.
B
The notable differences here are that the topics are Pub/Sub topics that handle the messaging, rather than Kafka topics, but ultimately it's very similar.
B
Or sorry, sequences. So, messaging: I mentioned earlier the channel; that's an abstraction of a message transport that takes care of things like message persistence. The subscription allows listening to messages on a particular channel, which enables message delivery.
B
Flows is another interesting concept. These allow sequences of events, either in series or in parallel. For example, in a sequenced flow you could actually have a single event that represents the entire job, yet multiple services may have different roles in that particular job.
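A hedged sketch of such a flow, using Knative's Sequence resource. The service names are borrowed from the demo for illustration (the demo itself wired these services with triggers rather than a Sequence):

```yaml
apiVersion: flows.knative.dev/v1
kind: Sequence
metadata:
  name: image-pipeline
spec:
  steps:                        # each step receives the previous step's reply event
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: resizer
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: watermarker
  reply:                        # where the final result event is sent
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```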
B
Okay, so what's next? I'm sure that you've left this presentation really interested in how Knative Eventing works, and perhaps Knative more broadly. You can learn more at the knative.dev docs link provided here; this contains everything you need to know on how to get started with Knative. If you're a GCP customer and want to do a quick trial of our implementation, that will get you familiar with those key concepts, and you can extend and play from there.
B
Here's
a
link
to
our
quick
start
documentation
which
can
be
found
at
cloud.google.com,
dot,
slash
event,
slash,
anthoc,
quickstart
link
provided
here
and
then
to
get
involved
view
us
on
github
there
you
can
find
out
things
like
like
how
the
this
is
implemented
and
can
and
start
contributing,
which
is
really
exciting.
B
Now,
a
couple
notes
of
some
other
open
source
events
that
are
happening
provided
by
google
open
source
live
on
november
5th.
We
have
go
day
from
9
to
11
pacific
time.
B
In
this
session,
golang
experts
will
share
updates
on
everything
from
go
basics
to
package
discovery
and
editing
tools,
you'll
hear
from
our
partner
khan
academy,
who
will
walk
through
an
interesting
use
case
about
how
the
organization
is
using,
go
to
save
time
and
money
on
december
3rd.
There
is
kubernetes
day.
In
this
event,
kubernetes
experts
at
google
will
cover
the
life
of
a
kubernetes
api
admission
web
hooks
how
apply
works
and
the
distributed
value
store
etcd.
B
So
we
hope
to
see
you
all
in
those
events
as
well,
they're,
surely
to
be
interesting
again.
Thank
you
for
your
time.
This
was
great.
You've
been
a
great
audience,
really
appreciate
it.
If
you
have
questions,
we
want
to
hear
from,
you
feel
free
to
reach
out
to
me
directly,
as
I
would
love
to
engage
with
you
I'd
be
happy
to
answer
any
questions
that
you
that
you
have
about
how
you
could
apply
this
technology.
B
If you want to understand more about the examples and samples we showed you today, we're happy to provide them, and most importantly, I'd love to hear about your use cases and scenarios. What are you looking for Knative Eventing to solve? Do you have a problem and you're not quite sure how to solve it? Do you have a problem that we're not well positioned to solve yet? Do you have applications for this technology that you're excited about? No matter what, I would love to hear from you.
C
I'll read them out loud and we can go through them, if you think that's a good plan. The first question is: great demo of Knative Eventing with GCP; just to double-check, nothing in the demo setup is tied to specific GCP capabilities, and it's possible to implement on other clouds using upstream k8s artifacts?
B
So let me answer this first, Nick, and you can add additional color if I miss anything. So yes, everything you saw here can be implemented on other clouds. In fact, most of what Nick showed today was not using the GCP gcloud commands; it was using the kubectl commands. The only exception to that is the source: the GCS bucket source was a vendor-provided source.
B
Now, you would have to have an equivalent Knative source, written for AWS or that you wrote yourself, for querying another storage location, so that would be the only thing you'd have to swap out. That was a vendor-provided source, but I could imagine that other clouds may have something similar, and you could write a source to swap that in. Everything else is all using Knative primitives.
B
The implementation we showed was also using the GCP broker, but nothing substantive about the demo you saw required the GCP broker; you could be using a different broker as well. There might just be some idiosyncrasies about things like the tracing support.
C
I don't have anything to add to that; that's a good answer, Bryan! Let me go ahead and read the next one, from Rash: does Knative have any advantage over Kafka?
B
So
again,
let
me
start
here
nick
and
then
you
can
add
more.
I
think
I
would
like
to
phrase
your
question
slightly
differently,
because
the
broker
which
which
utilizes
the
message
queue
there
is
a
broker
right.
So
in
that
sense
it
is,
it
can
be
using
kafka
right.
So
what
does
it
provide
to
extend
kafka?
B
And
it's
it's
all
about
having
that
decoupled
system
where
you
didn't
have
to
worry
about
how
to
communicate
with
with
kafka
all
of
the
messages
are
sent
and
translated
via
the
source
to
the
cloud
events,
format
and
then
consumed
with
a
cloud
event
format,
so
that
your
consumer
applications,
your
containers,
only
need
to
know
how
to
use
the
cloud
sdk,
the
marshalling
library,
in
order
to
understand
the
details
of
that
event,
so
it
was
all
somewhat
standardized
and
the
individual
developers
of
services
don't
have
to
have
an
understanding
or
appreciation
for
how
that
message
is
delivered
nearly
as
much
and
that's
a
very
high
level
answer,
but
just
the
the
cole's
notes
there
is
it's
it's
it's
not
about.
B
It's about what it adds on top of maintaining your own Kafka implementation, and it's that the components are maintained for you.
C
So
there's
a
al
also
a
couple
of
kafka
related
questions
there
by
managing
jingdong.
The
first
one
is
the
sk
native
replacement
for
other
pop-up
tools,
like
kafka,
sure
service,
rabbit
mq
and
the
other
one,
which
I
think
you
partially
answered
just
a
bit
ago.
Is
it
possible
to
elaborate
on
use
cases
between
using
kafka
and
key
native.
B
So I don't have specific answers on that one in general; I think it depends on which broker component, right? In the GCP broker component, I showed in the diagram that there is an error topic (that's not the technical name, but that's the idea), and you can also utilize a dead-letter queue to react to failures. But I think the idiosyncrasies are going to depend on the broker that you're using.
B
What
I
would
suggest
is
why
don't
you
reach
out
to
me
offline,
so
brian
zimmerman
brian
with
the
y
at
google.com,
and
actually
it's
if
you're
still
seeing
my
screen.
It's
there
on
the
screen
but
reach
out
to
me,
and
I
will
get
you
a
much
more
well-formed
answer
very
likely
pointing
to
the
documentation
online.
That
will
specifically
answer
your
question.
B
It does require some type of messaging system, right? It doesn't have its own messaging system; the channel, basically. So there's a Kafka channel, you can use a Pub/Sub channel, and those are implemented in the different brokers, like the GCP broker or the Kafka broker. There's also an in-memory channel, so that you don't have to use a different type of channel, which just keeps the events in memory.
B
It
is
not
recommended
for
production
use,
but
if
what
you're
getting
at
is
I'm
trying
to
develop
on
my
workstation-
and
I
don't
want
to
stand
up
all
these
extra
components
just
to
get
my
container
written,
that's
where
the
memory
channel
can
help,
but
as
far
as
production
implementations
to
ensure
delivery
guarantees
you
it's
not
recommended
to
use
in-memory,
you
should
use
another
channel
like
like
calf
cut,
pups
of
it
or
etc.
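For illustration, an in-memory channel is declared like any other channel resource, and swapping in a durable implementation (Kafka, Pub/Sub) is a matter of changing the kind; this sketch is not from the demo:

```yaml
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel          # dev/test only; no persistence or delivery guarantees
metadata:
  name: dev-channel
```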
B
That's
one
I'll
have
to
get
back
to
you
on
so
send
me
an
email
brianzamberman
at
google.com,
and
I
will
be
happy
to
to
answer
that
offline.
Unless
nick
you
have
an
answer
off
top
of
your
head.
B
I think, at a high level, the approach that we're taking in GCP is that a lot of things that are happening in the environment will generate events, and the vendor-specific sources that we're providing are able to pull from those events. So, for instance, as I showed on one of the slides, there are 60-plus services that are integrated through Cloud Audit Logs, so things like BigQuery or Firestore, etc.

B
And so what you can do is, when you're writing to a BigQuery instance, for example, that can generate an event that triggers your Knative service in GCP.
B
I think this is one best discussed offline, if you don't mind reaching out. I don't think there's anything fundamentally wrong in what you're saying; however, I think that there are a lot of very specific details there.
B
The scaling, the revisions, the deployment, the management of services: that is running serverless workloads on top of Kubernetes, and Knative Eventing is just the extension of that. In a serverless setup, if the only way to execute your services is by reaching out to the endpoint directly, that limits the decoupling that you can achieve. Hence why Knative Eventing was created as that right-hand companion to Knative Serving; it really just goes hand in hand.
B
And I will say, I know I've said this a few times, but feel free to reach out. Often it's hard to understand the full nuance of a question in 15 seconds, and I'd love to give you a lot more time to discuss things openly, bringing in the developer subject-matter experts as required. So definitely feel free to reach out for follow-ups on any of these questions, or for anything else that comes up; we'd love to hear from you.

B
And thank you, everyone, for joining us today. We're coming to the end, so if there are any other questions, I'm happy to answer them; if not, we'll hand things back over to our moderator. Thank you.