From YouTube: Google Open Source Live "Knative day" | Full Event
Watch all previous events here → https://goo.gle/OSSLive
Good morning, good afternoon and good evening, and thank you for joining Knative day on Google Open Source Live. This is an awesome monthly series of sessions led by open source experts from Google and community leaders. We really appreciate the ability to come together digitally and keep this community moving forward.
This is an extremely exciting time for us at Google, as we have recently released a preview of our eventing product based on the Knative technology we're talking about today. I hope you enjoy the sessions, and we look forward to seeing you at the after party. A couple of housekeeping items. First, don't forget to put your questions in the live Q&A forum below the live stream window; if you're viewing in full screen, you'll need to exit full screen to see the live Q&A forum. We really look forward to your questions.
Our sessions have been pre-recorded to allow for accurate transcripts, but also so the speakers can focus on answering your questions live. And of course, once we're done, don't forget to join us at the after party on Google Meet; we'll share a link to the Google Meet at the end of the last session. Without further ado, let's hear from Akash and Chen.
A monolith is where there is a single application layer which contains your presentation layer, your business logic and the database layer, all integrated into a single platform. Take this e-commerce store as an example: all its five services run under a single application layer. Even though this looks simple, there are some inherent issues with it.
As the application grows, we will have more services and more teams. It will soon become overwhelming for developers to build and maintain the codebase. And not just growing the application: even changing anything will require developers to build and test the entire application, which is going to be a nightmare.
To work around these problems, you could adopt a microservices architecture and start modularizing the application into small stand-alone services that can be built, deployed, scaled and maintained independently. We start by pulling out two of these services and the database, followed by a few more services.
Finally, we have all services pulled out independently. In reality, we would break these services down into even more modular microservices, but for the simplicity of this presentation I have not done so. Now that each service is independent, let's see what advantages we get out of this architecture.
First, a microservices architecture introduces the concept of separation of concerns. This promotes agile development. Each microservice has its own independent codebase and is built, maintained and deployed independently by individual teams. Each team has the flexibility to build its service in the language of their choice, and can also update the tech stack of each service independently.
On top of all of this, we can now scale our application and each individual service independently, not just vertically but also horizontally. Now, these are all advantages of a microservices architecture, which is good, but we also need to know about some of the drawbacks that are inherent in this architecture.
Now, here is exactly where event-driven microservices come to the rescue. But before I go into event-driven microservices architecture, I want to take a step back and take a couple of minutes to go over what exactly an event is. As a software system operates, an occurrence is the captured statement of fact, and an event is a record expressing that occurrence and its context.
An action is executed when it is notified about the occurrence by receiving the event. So in simple words: something happens; we capture that something happened in an event, so the event has the occurrence as well as the context data; and then one or more actions are triggered based on that. It is important to know that events represent facts, and therefore they do not include a destination, and the producer has no expectation of how the event is, or will be, handled. So now, back to event-driven architecture. An event-driven architecture has three main components. First, the event producers: they are exactly what the name says, they produce the events. Second is an intermediary that receives an event and routes it to the next receiver.
It is a set of composable primitives to enable late binding of producers and consumers. But for an intermediary to work for every kind of application, we need to standardize on something, and that's where we standardized on the event envelope. Knative uses CloudEvents as the event envelope. CloudEvents is a vendor-neutral specification for defining the format of the event, and it is a CNCF project. In Knative, we call the producers event sources.
These produce CloudEvents specifically. This could be your own service producing a CloudEvent, or it could be a special source, such as a GitHub source or a Kafka source, that can convert GitHub and Kafka events into CloudEvents and feed them into a Knative-based application. We'll go over some of these details in later sections.
A trigger is how services subscribe to the events from a specific broker. Along with the intermediary, we also have an events and sources registry. Now, why do we need a registry? Think about examples like client tools trying to list all the sources available on the cluster, or finding out what kinds of events are available on the cluster. This is where the sources and events registry helps build the user experience. Broker and trigger are not the only primitives that we have.
Thanks, Akash. Hi, I'm Chen, and I'm going to demo a sentiment analysis application. In this demo, I will upload an image like this one to a Google Cloud Storage bucket. The application will detect all the faces in the image, like here, extract them out and determine the emotion for each of them. We will see the results on the application's web page soon after I upload the image. Here I'm on my Cloud Storage bucket page; at the moment I have no image uploaded in my bucket.
Let's look behind the scenes. The application consists of these components. We use a Cloud Storage bucket to store the uploaded images, and we use a Firestore database to store the final results. We have five microservices deployed as Knative Services on a GKE cluster, and each of them has a different responsibility.
The first service was invoked with the event to retrieve the image and detect the bounding boxes for each face in the original image; it then produced a new event back to the broker with the bounding box coordinates. The second trigger subscribed to such events and invoked the second service, which used the coordinates to extract the face boxes as separate images. For each face image, it produced a new event with a reference to the image.
Now you have seen how these components are connected together. Let me give you a quick peek at what's really happening in the cluster. Here in my terminal, I'm tailing the logs from all the services except the one for the UI, and they are in order. This is the first service, which detects all the faces. This is the second service, which extracts all the face boxes.
Some are community-owned, like the Kafka source and the GitHub source, as I just mentioned, and some are vendor-owned, such as the Cloud Storage source, as you have seen in the demo. TriggerMesh also supports an AWS SQS source. Exactly; you can even write your own custom sources. For example, let's say you have a legacy system that generates events in some legacy format. You can write a custom source that converts those events into the CloudEvents format, and then, with trigger and broker, you can integrate with the rest of your system.
Broker and trigger are the main advanced intermediaries in Knative Eventing. You can treat a broker as a black box that you throw events into, and use a trigger to subscribe to events from it. In a trigger, you can specify filters using CloudEvents attributes, so that you only subscribe to the events you are interested in. The event sink in a trigger can be any addressable.
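To make that concrete, here is a minimal sketch of a Trigger; the resource names and the event type below are hypothetical, not from the talk. It subscribes a Knative Service to one CloudEvents type from the default broker:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: orders-trigger                 # hypothetical name
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created  # only deliver events with this CloudEvents type
  subscriber:
    ref:                               # the event sink; any addressable works here
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor            # hypothetical Knative Service
```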
Here are some lower-level primitives in Knative Eventing. Channel and Subscription are the messaging primitives. A Channel is an abstraction of a message transport, which takes care of things like message persistence, and a Subscription allows subscribing to messages from a specific channel and takes care of message delivery.
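As a rough sketch of these two primitives together (all names hypothetical), a Channel plus a Subscription that wires a consumer to it might look like this:

```yaml
apiVersion: messaging.knative.dev/v1
kind: Channel
metadata:
  name: my-channel                  # hypothetical channel
---
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: my-subscription             # hypothetical subscription
spec:
  channel:                          # the channel to consume messages from
    apiVersion: messaging.knative.dev/v1
    kind: Channel
    name: my-channel
  subscriber:
    ref:                            # where the messages are delivered
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-consumer             # hypothetical consumer service
```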
They can come in handy for certain use cases. For example, Sequence makes it really easy to build an event-augmenting flow where, at the beginning of the flow, you have a bare-minimum event; each service in the flow adds additional information to the event; and in the end, you have an event with much richer information.
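A minimal sketch of such an augmenting flow using Sequence (the step names are hypothetical): each step receives the event, adds information, and the enriched result is delivered to the reply destination.

```yaml
apiVersion: flows.knative.dev/v1
kind: Sequence
metadata:
  name: enrich-event                # hypothetical flow
spec:
  steps:                            # executed in order; each step augments the event
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: add-metadata          # hypothetical step services
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: add-user-details
  reply:                            # the final, enriched event lands here
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```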
The benefit of using a flow is that you can treat it as a unit of work in which any intermediate events won't be visible from outside. This prevents unexpected events from flowing into your system and causing unexpected behavior. A service in a flow is essentially an addressable that can consume events and potentially produce events. Also, by this definition, a step in a flow could be another flow, or something like a broker.
If you have any follow-up questions, then please reach out to us on our community Slack channel, or feel free to reach out directly to me or Chen on Twitter or LinkedIn. My Twitter handle is underscore akash v and my LinkedIn is akash.rv, and Chen's Twitter handle is chan with eight o's and the LinkedIn is cshou.
In case you haven't attended the previous sessions, let's start with some brief context on Knative. Knative is a Kubernetes-based platform to deploy and manage serverless workloads. It's built on top of Kubernetes, with the idea of abstracting away complex Kubernetes details from developers, to enable them to focus on what matters most: their business logic.
It has two core components: one is Knative Serving and the other one is Knative Eventing. Knative Serving allows for rapid deployment of stateless HTTP containers. It supports autoscaling of those containers from 0 to n, and it also deals with routing and managing traffic splits during deployment. For example, you can deploy a new version of your service and have only 10 percent of the traffic go there, then twenty percent, then thirty, and so on.
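As an illustration of that traffic-splitting example, here is a sketch of a Knative Service that sends 10 percent of traffic to the newly deployed revision and keeps 90 percent on the previous one; the image and revision names are hypothetical:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app                      # hypothetical service
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/my-app:v2   # hypothetical new version
  traffic:
    - revisionName: my-app-v1       # hypothetical earlier revision
      percent: 90
    - latestRevision: true          # the newly deployed revision
      percent: 10
```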
Both Knative Serving and Knative Eventing are great on their own, but they also complement each other well. Using them together means that you can easily have Knative Serving respond to things other than direct HTTP requests, and it allows Eventing to automatically scale up to the peaks of demand and scale down to zero when no demand is present.
It's worth noting that CloudEvents provides several SDKs in different languages, such as Python, Go, JavaScript, etc. The other advantage is portability, because your containers can be moved to other environments that are also CloudEvents-friendly. CloudEvents have context attributes and data. You can think of the context attributes as metadata, and they are designed in such a way that they can be serialized independently of the event data. This allows intermediaries to inspect them without having to deserialize the data, for example to make routing decisions.
There are also multiple protocol bindings that define how to map an event to a specific protocol message. For example, the HTTP protocol binding for CloudEvents defines how CloudEvents are mapped to and from HTTP request and response messages, or the Kafka protocol binding, which defines how you can map events between Kafka messages and CloudEvents, and so on.
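For a feel of the HTTP binding, here is roughly what a binary-mode CloudEvent looks like on the wire: the context attributes travel as ce-prefixed headers while the body carries only the data. All of the values below are made up for illustration.

```http
POST /default/my-broker HTTP/1.1
Host: broker-ingress.example.com
ce-specversion: 1.0
ce-type: com.example.order.created
ce-source: /orders/service
ce-id: 1234-5678
Content-Type: application/json

{"orderId": 42}
```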
Given that general background, we can now focus on the main topic of this session, which is Knative sources. Here's the proposed agenda for the talk. We will start with what Knative sources are and why we need them, as well as some of the design choices we made along the way. We will also go over some of the sources that are available out there for you to use, and we'll go over more details on how you can implement your own source if none of the available ones fits your purpose. And we'll finally conclude.
So, on to Knative sources. Let's start with the fundamental question of why we need Knative sources, and the answer is pretty simple: Knative Eventing adopted the usage of CloudEvents; however, most producers still use their own event formats, so there's an impedance mismatch. We need Knative sources to be able to bring existing non-CloudEvents producers into this CloudEvents world, and if we want more breadth of producers, then we need something that will allow us to convert their event formats into CloudEvents.
So what are Knative sources? Knative sources are constructs that produce or import events into the cluster as CloudEvents. They convert incoming proprietary events into CloudEvents and send them downstream, and downstream can be any HTTP addressable, which we also call a sink. For example, it can be an end consumer, such as a Knative Service, or a middleware like, for example, the Knative broker.
Knative is designed to be as native to Kubernetes as possible, and accordingly Knative uses the standard way of extending Kubernetes: the custom resource definition, or CRD. CRDs define new resource types, similar to a class in a programming language. We can then create instances of a CRD, called custom objects, or COs. These are similar to instances in a programming language.
If you would like more information about this, the link below has the official documentation about CRDs. Broadly speaking, we have two types of sources: push-based sources and poll-based sources. Push-based sources are ones where an upstream event producer pushes an event into our source, such as the GitHub source, where GitHub makes an HTTP request to our source.
This has the downside that we must expose an endpoint for our source to be hit. It doesn't necessarily have to be on the public internet, but it does have to be somewhere that the producer can reach. One of the large upsides is that these are much easier to scale: we can leverage things like Knative Serving, which already does this, to scale up to meet demand and scale down to zero when there isn't any. Poll-based sources, on the other hand, pull events from the upstream producer.
They can either be doing so on a continuous basis or on a periodic basis. They need network access to the event producer, but they don't necessarily have to expose an endpoint themselves. In general, these are more difficult to scale, because we need our own way to define how congested the source is. It also means that something is either continuously running, or at least periodically running, in the background.
While developing Knative, we tried two different CRD models for our sources. The first we tried was called the provisioner model. We had a single CRD named Source, and inside the specification of that CRD there was a field called provisioner. Every custom object that was made would fill in that field, and that field would determine what kind of source it was; it might be GitHub or Kafka, or something else. This made it very easy for our Knative core code to interact with sources: there was only one type, and we knew it at compile time.
This tended to work pretty well as long as the parameters for our sources were very similar. As the number of sources increased, the parameters started to diverge, and we had to add a generic parameter map. This made it very difficult for users to understand what they needed to configure, or what even could be configured, for a given source type. It was also more difficult to figure out what sources were even available to begin with. The next model we tried was called the operator model.
Although we went with the operator model, where we have different CRDs for different sources, we also wanted to have all sources share some similar shape. This would allow us to treat them polymorphically in code; for instance, a cluster operator will be able to inspect the resources without having to be fully aware of the implementation, and it will also allow us to build common libraries that can be reused all over.
For this, we use duck typing. Duck typing in the Kubernetes world is a technique that allows you to define a partial schema of an object. In the example below, we see three JSON objects: the one on the left-hand side has a foo and a bar property, the one in the middle has a baz property, and the one on the right has a spam and a ham property, but all of them have the same knative property.
We have eventing and serving, and if we want to reason about that knative property across all of these different resources, we can use duck typing to extract that partial schema. That's exactly what we did with our sources: although we have different CRDs, and each CRD has specific attributes, all sources share a partial schema; they all implement what we call the Source duck type. Here I'm showing three different objects: a Kafka source, a ping source and a MongoDB source.
Although they are different, you can see that they all have this sink attribute, marked in red, where the sources can specify the sink to which they will send the events. They can also optionally specify CloudEvents overrides, shown in dotted red rectangles, which are attributes that are added or overridden in the outbound events. Although they share these common attributes, it's worth noting that each of these objects defines its own specific attributes. For example, in the Kafka one you can specify a consumer group in the spec, or in the ping source
you can specify a cron schedule, and in the MongoDB resource you can specify a database, for instance.
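Here is a sketch of a PingSource that shows the shared duck-type attributes next to a source-specific one; the names and values below are hypothetical. The cron schedule is specific to PingSource, while sink and ceOverrides are the common Source duck type.

```yaml
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-every-minute           # hypothetical source
spec:
  schedule: "*/1 * * * *"           # source-specific: a cron schedule
  data: '{"message": "ping"}'       # payload carried in the produced CloudEvent
  sink:                             # shared duck-type attribute
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display           # hypothetical sink
  ceOverrides:                      # optional: add or override outbound attributes
    extensions:
      environment: staging          # hypothetical extension attribute
```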
Having introduced the operator model and the Source duck type, we will now talk a little bit about what it means to be a compliant Knative source. In order to be a compliant Knative source, you need to conform to the source specification.
We added a link here for your reference. Roughly speaking, the spec talks about CRD-level requirements and custom-object-level requirements. Regarding the CRD-level requirements, we require that the CRDs for sources have a particular label specifying that they are actually sources, they have to have a sources category, and they should also have an annotation where they can specify the event types they can produce. Regarding the custom object requirements, they basically need to implement the Source duck type, which has the spec.sink that we talked about in the previous slide, plus some status conditions that allow you to specify whether the source is ready or not, as well as the resolved sink URI.
Now that we've talked a bit about what a source is, let's see what sources are already available. Core sources are the ones that are maintained by the Knative Eventing project itself. We have four of them. The API server source is the source for Kubernetes events; it produces an event for the creation, update or deletion of a resource within the Kubernetes cluster.
In order to create an API server source, you can specify the highlighted attributes on the slide. The service account name is the Kubernetes service account that this will run as; because this source uses watches against the Kubernetes API server, it needs to have sufficient RBAC permissions to do so. The mode, either Resource or Reference, describes whether the event itself should contain the entire resource that's being modified, or whether just a reference to that resource is going to be in the event. Finally, there's a list of the resources that are actually going to be watched.
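A minimal sketch of an ApiServerSource with those attributes; the service account and sink names are hypothetical, and the service account needs RBAC permission to watch the listed resources:

```yaml
apiVersion: sources.knative.dev/v1
kind: ApiServerSource
metadata:
  name: watch-pods                  # hypothetical source
spec:
  serviceAccountName: events-sa     # must be allowed to watch the resources below
  mode: Reference                   # send only a reference, not the full resource
  resources:                        # the resources to watch
    - apiVersion: v1
      kind: Pod
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display           # hypothetical sink
```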
This allows you to very quickly and easily get an ad hoc source up and running. As we were using sink binding, we realized that by far the most common usage was with a Deployment. Container source is another meta-source that combines a sink binding with a deployment. By creating this one resource, it will create a deployment with the container image you specify and an injected sink URI. This allows a very quick way to create ad hoc sources.
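A sketch of a ContainerSource, with a hypothetical image and names: Knative resolves the sink to a URI and injects it into the container (in current releases as the K_SINK environment variable), so the image only has to POST CloudEvents to that address.

```yaml
apiVersion: sources.knative.dev/v1
kind: ContainerSource
metadata:
  name: my-adhoc-source             # hypothetical source
spec:
  template:                         # an ordinary pod template
    spec:
      containers:
        - name: producer
          image: gcr.io/my-project/event-producer:latest  # hypothetical image
  sink:                             # resolved to a URI and injected as K_SINK
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```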
Besides those four core sources that come out of the box with Knative Eventing, there are other sources that are community-owned and need to be installed separately. We have things like the GitHub source, which will allow you to listen for GitHub events; for example, if you want to get a notification when a pull request is created in a particular repo, to trigger some CI/CD pipeline, you can use this. Or there's the Kafka source, which will allow you to receive events.
For example, we have the Cloud Pub/Sub source, which will allow you to receive events whenever you publish a message to a Pub/Sub topic, or the Cloud Storage source, which will allow you to listen for changes in GCS buckets. For example, when an image is uploaded to a GCS bucket, you can get an event and trigger some computer vision pipeline, let's say to do object segmentation or image recognition.
We also have the Cloud Scheduler source, which will send an event based on a cron schedule, similar to the ping source, and some others. TriggerMesh has been collaborating and contributing a bunch of sources, especially for AWS services such as SQS, DynamoDB and Kinesis. And finally, VMware has also been contributing some Knative sources.
This is the kind of flowchart we think through whenever we think about creating a new source. The first and most important question is: do you want to make your own CRD? This makes it much easier for users to use your source, but it also increases the complexity of building everything. If you make your own CRD, you do need to understand more about Kubernetes and the Kubernetes event model.
Regardless of whether you choose to make your own CRD, you will need to write your own data plane: your own container image that sends the events themselves. If you make a CRD, you also need to write a controller as well. This flowchart can help determine what you should do. If you want to make your own CRD, we recommend that you clone our sample source repository and modify it as necessary.
If you don't want your own CRD, then can you run your container image as a Deployment? If so, then use that container image and make it a container source. If not, can you run a container image in any PodSpecable? If so, make that PodSpecable resource and make a sink binding as well. If not, then, once again, we recommend you clone our sample source repo and modify it. The sample source is an example of how to build a source CRD along with its controller and webhook.
It follows all of Knative's best practices and is set up to be easily modifiable, to get your CRD up and running quickly. For experimentation or just ad hoc sources, we recommend container source or sink binding, because of just how quickly you can start. As we mentioned, if you make your own CRD, then you're going to have to implement your own control plane. The control plane is everything involved with getting a source ready to send events, but not actually sending the events itself.
It's generally built up of two pieces: the controller and the webhook. The controller will read the spec of every custom object and then write out the status of every custom object, running in a loop forever, where it observes the real world, diffs that against the desired state from the custom object, and acts to make the real world match the desired state.
The webhook will default, validate and convert any of these resources as well. The validation that the webhook does is not on the events themselves, but rather on the custom object. It can go beyond the level of validation available in CRDs by doing things like validating that two mutually exclusive fields are not both present on the custom object at the same time. The data plane, meanwhile, is everything involved with sending the actual events.
We often call the pod that's sending the events the receive adapter. What it does is receive an incoming event from somewhere. This could be an external web request, in the case of the GitHub source; it could be an in-process timer, in the case of the ping source; or it could be polling from an external API, in the case of the API server source.
It then sends this event to the sink that was passed in as an environment variable, and finally, if the original event producer needs acknowledgements, the receive adapter will then acknowledge upstream. The picture we have here is of a poll-based source: the receive adapter is pulling changes from the upstream producer and sending those as CloudEvents to the sink. CloudEvents provides SDKs in multiple languages.
Here's an example of how we can send an event in Go. The first thing we do is make a client; in this case we want an HTTP client, and that's what the default happens to be. Next, we make an event. Every event requires a source, a type and an id. We specify the source and type here and allow the SDK to default the id for us. We also choose to add a data payload, in this case "hello world".
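That description maps closely onto the CloudEvents Go SDK. A minimal sketch of the snippet being narrated might look like the following; the target URL is hypothetical, standing in for whatever sink the source should deliver to.

```go
package main

import (
	"context"
	"log"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

func main() {
	// Make a client; the default happens to be an HTTP client.
	c, err := cloudevents.NewClientHTTP()
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}

	// Make an event. Every event requires a source, a type and an id;
	// we set source and type and let the SDK default the id for us.
	event := cloudevents.NewEvent()
	event.SetSource("example/uri")
	event.SetType("example.type")

	// Add a data payload, in this case "hello world".
	if err := event.SetData(cloudevents.ApplicationJSON, map[string]string{"message": "hello world"}); err != nil {
		log.Fatalf("failed to set data: %v", err)
	}

	// Send the event to the sink (a hypothetical local target here).
	ctx := cloudevents.ContextWithTarget(context.Background(), "http://localhost:8080/")
	if result := c.Send(ctx, event); cloudevents.IsUndelivered(result) {
		log.Fatalf("failed to send: %v", result)
	}
}
```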
With that, we have reached the end of the talk, so let's jump to the conclusions. We hope you have a clear understanding of what Knative sources are. In general, eventing sources are a key construct for any event-driven system. A Knative source is an abstraction that produces CloudEvents and sends them downstream to any configurable sink.
If you're interested and want to learn more about any of the concepts we've talked about, or are just excited to start using Knative sources, please visit our public docs. If you have any questions, please join our Slack channel, and if you just want to explore our code or our future plans, please look at the knative GitHub organization.
Thanks, Adam and Nacho. For anyone interested in contributing their own sources, that should get you started, and this solution is made even more powerful with a vibrant community of sources in the repository. Next up, Grant and Song will take us through the nitty-gritty details of the broker and the trigger; this is our advanced topic. As always, put questions in the live Q&A forum as you think of them, and start getting pumped for the post-event party. Handing it over to you, Grant and Song.
Its implementation looked like this: an ingress pod receives events from publishers and forwards them to a channel. The dispatcher pod receives every event from the channel, checks it against each trigger's filter, and delivers any matches to the relevant consumers. For simplicity, the ingress and dispatcher pods only handle the events for a single broker, and they run in the same namespace in which the broker object was created.
The single tenancy of the single-tenant channel broker was also a resource-efficiency challenge. Since it didn't have scale-to-zero autoscaling, the pods for ingress and dispatcher were always running, even if the broker was idle. For clusters with many idle brokers, this was a lot of pods doing nothing but taking up space.
Its implementation is similar to the single-tenant channel broker, but the ingress and dispatcher workloads are shared by all brokers in the cluster, and they run in the system namespace. Autoscaling to zero isn't so critical anymore, because our minimum footprint is now two pods per cluster instead of two pods per broker, and because all brokers share the same infrastructure.
Quite similarly, each Knative Trigger object corresponds to a RabbitMQ queue and a dispatcher. Recall that in Knative Eventing, triggers express interest in events by filtering on event attributes, and this is implemented in RabbitMQ using a headers exchange and bindings. First of all, event attributes are translated to message headers; a binding then connects the queue for a trigger to the exchange by applying the filter attributes as binding arguments.
There is no autoscaling on the ingress currently; however, on the dispatcher side, KEDA is used to scale dispatchers. Currently it does not support multiple replicas, but it does support scale to zero. So if most of your triggers are idle, you are still fine in terms of resource usage.
Next, let me walk you through how events flow in the RabbitMQ broker. The publisher sends an event to the broker ingress endpoint. The ingress translates the event to a RabbitMQ message and publishes it to the corresponding exchange. A queue is created for each trigger and connected to the exchange via a binding; the binding filters the events based on the message headers, that is, the event attributes. Finally, if the event passes the filter, the dispatcher pulls it from the queue and sends it to the consumer. In RabbitMQ, queues can share a single message store.
Similar to the multi-tenant channel broker, the Kafka broker uses a multi-tenant shared data plane. The ingress is shared to serve multiple brokers, distinguishing them by path. A Kafka topic is created for each broker, just as a channel is created for each broker in the multi-tenant channel broker case. A multi-tenant dispatcher is responsible for pulling events from Kafka, applying the filters and delivering the events to the consumers.
What motivated this design is the pay-as-you-go pricing model of Pub/Sub: users are billed by the overall ingress and egress traffic, while there is no storage cost. Therefore, it is important to avoid duplicate message deliveries from Pub/Sub for multiple triggers in the happy path, while in the error path there is no storage cost for publishing failed events to multiple retry topics.
A broker cell can be created and configured by an operator upfront, or a default broker cell can be created automatically by the control plane. The default broker cell is only created when there is at least one broker. In this diagram, brokers 1 and 2 are assigned to the default broker cell, while broker 3 is assigned to a custom broker cell, which has a data plane isolated from the default. Note that the ability to assign brokers to a non-default broker cell is not fully implemented yet, but this is low-hanging fruit.
Thanks, Grant and Song. That should give you a good understanding of the details of this solution; I hope you all found it as interesting as I do. To learn more, you can visit the Knative documentation pages, which we have linked in our Q&A section. If you're a GCP customer, you can check out our recently released beta implementation of eventing: Events for Cloud Run for Anthos.
You can find this at cloud.google.com/anthos/run and link into the documentation from there. We have our quickstart, and a Qwiklab that is linked from our blog post, which can help you get started with this solution on GCP. And then, as I mentioned, the Knative documentation is your best place to learn more about Knative and Knative Eventing in general.
Okay, so that's a wrap, but not the end. Now it's time for the after party. We've added a Google Meet link to join the after party; please look for a button on the agenda page. Today's speakers will also be joining. We will have some quizzes, an interactive activity and a surprise guest. I hope to see you all there. Thank you very much for joining us.