From YouTube: Kyma Knative Integration WG meeting 20180117
B
So, first of all, we discovered that to upgrade Knative in Kyma, we need to upgrade Kubernetes first, and it turned out that upgrading Kubernetes as such might be a complex task for now. So we dropped this activity altogether for this milestone; upgrading Kubernetes and upgrading Knative is something that we will take up in the next milestone.
A
And the plan that we have decided on now is to go with the old version of Knative in this milestone: to use the core installation of Kyma with Serving 0.2.1 and the pre-release of Eventing 0.3, which works; at least from a local installation with Knative it works. We have tested this without the Kyma installation, with our newly published application, which is based on Knative now.
C
So our purpose was to publish images of this NATS Streaming provisioner controller and dispatcher first, and this is under Knative Eventing, and there are, you know, a couple of ways to do that. But we took a look and saw that ko actually manages it pretty well, pretty easily. And what is ko? ko is just a CLI tool coming from Google.
C
So what I mean by that, I can just show you. For example, this is a sample deployment file. Usually, in the image section, you need to provide, you know, the registry first and then the image with the proper version tag. But using ko, you can just provide the path, the import path, for the application or for the package, and that's it. Publishing this image, publishing a new version of this image and then using it from the deployments, it's all taken care of by ko. And how does it happen, actually?
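As an illustration of the deployment style being described, here is a hedged sketch of what such a manifest looks like; the package path and deployment name are placeholders, not the exact files shown in the meeting:

```yaml
# Hypothetical snippet: with ko, the image field holds a Go import path
# instead of a registry/image:tag reference. ko builds the binary, pushes
# the image, and substitutes the resulting reference before applying.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-controller        # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-controller
  template:
    metadata:
      labels:
        app: sample-controller
    spec:
      containers:
        - name: controller
          # Go import path of the package to build (placeholder path)
          image: github.com/knative/eventing/cmd/controller
```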
C
How does it do that? Instead of using kubectl apply, you are just using ko apply, and it's pretty simple to install: it's just a go get command, and after that you need to provide an environment variable for the repo that you want to use, and that's pretty much it. So you can use it with the deployments, or you can just use it to publish a new version of the image.
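The install-and-use flow just described can be sketched as shell commands; the registry name is a placeholder, and the go get path is the historical installation route for ko:

```shell
# Install ko (historical "go get" style installation)
go get github.com/google/ko/cmd/ko

# Tell ko which container registry to push images to (placeholder value)
export KO_DOCKER_REPO=eu.gcr.io/my-project

# Build, push, and apply in one step: ko replaces Go import paths in
# image fields with references to the images it just pushed
ko apply -f config/
```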
C
So that's what I've done, actually. I have used this ko publish to publish images for this controller and this dispatcher. How did I do it? I actually have a screenshot of it for simplicity. What I've done is, I first exported this environment variable pointing to our Kyma project, and after that, once inside this NATS folder under Knative Eventing, I didn't add anything: I didn't add a Dockerfile, I didn't build the image, I didn't build the application. Instead, I just used this command: ko publish -t.
C
This -t means the tag that I would like to use for the image, and I provided the relative path. If you are under the repository, then you can just use the relative paths for the package that you want to build. And this -B just disables one default behavior: by default, ko appends the md5 hash as a suffix to the image name, and we don't want that.
A
Just to add the insight from the Knative development working group: ko is very much used to change some sub-components of Knative, and it is very easy because you do not have to write Makefiles or Dockerfiles. The only thing is to go into your directory and apply it, and ko will always do all the other stuff for you. Of course, only for the Go language.
B
So basically, the idea was that in the past few days we started figuring out how we can adopt Knative eventing inside Kyma, so that we can first of all leverage what we are getting from Knative and, at the same time, see how it can be aligned in a way that it still serves the core Kyma functionality, which is to extend the applications via the eventing. And we wanted to see how we can do this.
B
Basically, this is what we want to achieve eventually. We want to achieve a state where you have different events coming inside Kyma, and as an operator, or some other persona (we haven't decided exactly who it will be), they should be able to somehow decide which events can use which kind of pub/sub. They may want to use something like a local one, for example NATS, or they may want to use something like a cloud pub/sub. So the rationale behind this is that not all.
B
Events are equally critical and equally need to be reactive when it comes to what someone wants to do with the event. For example, something like order.created is maybe something that will be more critical, because there might be a business transaction that should always succeed; but something like an analytical one, for example item.viewed, maybe is not critical, and if it gets missed it might not have a big impact, or it doesn't impact the business aspects as much, so that can still be absorbed.
B
This gives the advantage to the customer that he can improve his cost efficiency, because he can basically distribute the load of the events to different pub/subs, and then also get the right quality of service and save money based on what kind of quality of service he is getting for different events, as compared to just using one solution for all kinds of events. So this is more like an applicability option, but this is a high-level diagram.
B
Eventually, in Kyma, we should build our aspects, like the publish application and the subscription controller, that should act as the abstraction layers on top of Knative, and then help the customer arrive at a state where he can say: okay, I want to publish event one to this pub/sub and event two to that pub/sub, and at the same time also consume those events from those respective pub/subs. So this kind of nicely describes the flow.
B
This is how it should happen eventually; you guys can have a look. Then we have also discussed and worked out different aspects of the eventing. The first one is: we want to provision a messaging solution, basically how we want to provision a pub/sub, or a proxy of a pub/sub, so to say. For example, in the case of a Google Cloud Pub/Sub, you might have some kind of a provisioner that is acting as a proxy and then sending the actual request to the Google cloud.
B
But in the case of something locally deployed, like a Kafka or NATS, you might have the actual one running, and then this will be redirected to the dispatcher, which will act as a proxy layer for those, but that is still acting on the local one. So the provisioner has basically two main components: one is a controller and the other is a dispatcher. The controller is the one that is responsible for the lifecycle of those metadata objects that are concerned with the eventing.
B
So this basically translates to the topics or the subscriptions when you want to publish or consume the events, and the dispatcher is the one that is doing the runtime work. For example, if an event is coming in to be stored in Kafka, then it will go via the dispatcher and then eventually to the Kafka pub/sub. Same for the consuming: if the event has to be delivered to some subscriber who is willing to consume some particular event, then it will happen via the dispatcher as part of that. Any questions, anything so far?
B
Basically, for example, Rod has contributed the controller and dispatcher for NATS, and similarly for the other pub/subs there are already provisioners, which are essentially a controller and a dispatcher, already written. But in case there is some different pub/sub for which it is not yet written, then maybe the team or somebody has to contribute it.
B
For example, the SAP Enterprise Messaging guys are trying to write one for the EMS in SCP. And the other thing, or the challenge, that we have found so far is that we have something called event types in Kyma, and we are trying to figure out how they should map to some concept in Knative Eventing. Currently, Knative Eventing has channels, but a channel itself is, let's call it, a slightly heavy object, and it creates a lot of details, or a lot of sub-components, around it.
B
So currently we have to map them one to one, but eventually we don't want to have this. In the long run, we want to map it to some kind of a lightweight solution, so that the channel doesn't introduce a lot of load. For example, if you have a lot of event types, then you end up having a lot of channels, which means that you might have a lot of those surrounding Istio VirtualServices and other components that come alongside a channel.
B
So this is something that we want to do in the future, but for now, since most of the provisioners that are written map the channels and the event types one to one (for example, a Kafka topic is mapped to a channel), we are following this path. But that is something that will change in the future, when we no longer map an event type to a channel one to one. At a high level, this is how it looks.
B
You have a foo Kyma event type, and then it will create a foo Knative channel. Then there will be two services that get created. One is the Kubernetes service, which is where the requests can go for the foo channel, and it is basically referred to by the Istio VirtualService. So the actual redirection is happening via the Istio VirtualService, which is redirecting to the NATS dispatcher. The NATS dispatcher is always one for the whole pub/sub, but these two services are created per channel.
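The per-channel objects just described can be pictured with a hedged Channel manifest; the API version, provisioner kind, and name follow the Knative Eventing 0.3-era shape and are assumptions, not quotes from the meeting:

```yaml
# Hypothetical example: a Knative channel backed by the NATS Streaming
# provisioner. The controller reconciles this object into a per-channel
# Kubernetes Service plus an Istio VirtualService that routes requests
# to the single shared NATS dispatcher.
apiVersion: eventing.knative.dev/v1alpha1
kind: Channel
metadata:
  name: foo
spec:
  provisioner:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: ClusterChannelProvisioner
    name: natss
```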
B
If not, then I'll cover the publishing and the consumption part. So in the publishing, currently we are aligning basically with the API that contains the event metadata and the payload both in the body, because this was the first thing that we did to start with the CloudEvents specification. But in the later days, they have now also come up with the option that you can specify the metadata in the headers.
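The two encodings mentioned (metadata in the body versus in the headers) correspond to the CloudEvents structured and binary HTTP modes. A hedged illustration, where the endpoint, event type, and payload are placeholders:

```shell
# Structured mode: event metadata and payload together in the JSON body
curl -X POST http://channel.example/ \
  -H 'Content-Type: application/cloudevents+json' \
  -d '{"specversion":"0.2","type":"order.created","source":"/shop","id":"1","data":{"orderId":"42"}}'

# Binary mode: metadata moves into ce-* HTTP headers, body is the payload
curl -X POST http://channel.example/ \
  -H 'Content-Type: application/json' \
  -H 'ce-specversion: 0.2' -H 'ce-type: order.created' \
  -H 'ce-source: /shop' -H 'ce-id: 1' \
  -d '{"orderId":"42"}'
```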
B
So this is the whole flow of how the request eventually goes. You can see that it goes to a channel, and then from there (the channel is nothing but a service) it gets dispatched to the NATS dispatcher, then served by the NATS pod, and then eventually written to NATS Streaming. So this is kind of the whole flow described here. Now, the consumer part.
B
Currently, we are using two subscriptions: one subscription that we have at the Kyma level, which is something that the application developer will define, either by virtue of creating a lambda, or he will define it by himself when he wants to trigger a microservice via an event. So in the background we will create a Knative subscription when a Kyma subscription gets created.
B
There will be some additional business checks that we will do on top of what we have right now, just like the event activation, to ensure that we provide the same functionality that we have at present in Kyma. But again, once the event activation is successful, then the subscription is created, and then it refers to a Knative channel, and after that the dispatcher will do its normal work of realizing this subscription at the Knative level and then delivering the events to a subscriber, which can be my lambda or my microservice.
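The background mapping just described could look roughly like this as a Knative subscription; the group/version, field names, and object names follow the Eventing 0.3-era Subscription shape and are assumptions for illustration only:

```yaml
# Hypothetical Knative subscription created in the background for a
# Kyma subscription: it binds the channel to the subscriber (a lambda
# or a microservice) so the dispatcher can deliver events to it.
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: order-created-sub
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: foo
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: my-lambda
```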
B
So, yep. There are some open items that we have, like how we want to understand the Knative resources and how we can align them with the Kyma eventing model, and how the events are coming inside. Currently they are coming via the ingress, and I believe that on the Knative side they also want to have some kind of ingress as a source, so we also have this opportunity to align with them, to see if we can propose something similar for them.
B
Yeah, that's pretty much from my side. I will share one more thing, just to continue that. So yesterday I shared the Kyma use cases as well with the Knative community; I sent the documentation, because their meeting was quite late, around 11:30 at night, and I will get back to them now and then see how we can drive those use cases along with them. So we will have more updates in the coming days from the Knative side.
B
So yeah, these are the use cases. These are the same use cases presented to the Knative community, and this is something where we have our classical end-to-end user story as a use case. But then we have other use cases which we are not realizing right now; I believe that at some point we will go in that direction, where we can use the eventing for achieving asynchronous workflows. At the same time, we can also use the Knative aspects to integrate with third-party messaging systems, both for the publishing and the consumption parts.
B
So essentially, with these two use cases, what we want to do is: anything that is happening in the application, we can reflect that in a cloud ecosystem; and at the same time, if there is something that is happening in a cloud ecosystem, we can listen to that, consume it, and then take action in the application. So this is a summary of that: eventually, we are integrating a cloud system with the application via the eventing.