From YouTube: Kyma Knative Integration WG meeting 20181213
Description
Meeting Notes: https://docs.google.com/document/d/1BZoJQ5qsSudlix8PXjykQfsJtfwC7EgUy0BP3oORDsA/edit#heading=h.tfc71xo5lpmb
Unfortunately, the first talk is missing; notes from it can be found in the Meeting Notes.
A: But a lot of other companies are now trying to work together, and SAP also strives to move a little in this direction. The abstraction, that is, the definition of the object model for these abstractions, is a process. It is only a start, only the beginning, but it is very powerful, because you can already write applications using Knative eventing concepts and objects. The main objects are: first, the Subscription, which is like a normal subscription in Kyma, but it has something new.
A: This is a very small combination of these primitives: a channel and a subscription. A subscription gets information from a channel and can also send the result to a channel, and the combination of these can build flows. The subscription also has a call endpoint where a service or a lambda can be attached.
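The combination described here can be sketched as a Subscription that both calls a subscriber and forwards the result to another channel. This is only a sketch against the Knative eventing v1alpha1 API of that period; the names `from-channel`, `to-channel`, and `my-handler` are hypothetical:

```yaml
# Sketch only: a Subscription wiring an input channel, a callable, and a
# reply channel. All names are hypothetical; the API group/version reflects
# Knative eventing as of late 2018 (eventing.knative.dev/v1alpha1).
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: my-flow-step
spec:
  channel:            # input: where messages come from
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: from-channel
  subscriber:         # the call endpoint: a service or lambda
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: my-handler
  reply:              # output: results are sent on to another channel
    channel:
      apiVersion: eventing.knative.dev/v1alpha1
      kind: Channel
      name: to-channel
```

Chaining several such subscriptions, each replying into the next channel, is what makes the flows mentioned above possible.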
The result of this call can influence the further processing of the message. They have defined two concepts: the control plane of eventing and the data plane.
A: In the control plane, you can use these primitives to define, for now, a very simple delivery of objects between a source and a service.
The event sources project has just started. It was started in Knative and is moving very quickly. The purpose of all these sources is to bind external systems to Knative eventing; perhaps we will discuss this in the future. With a combination of the primitives you can also define complex processing flows.
For example, a source could feed a channel, and the channel can have multiple subscriptions.
A: You can define all of this declaratively, and this is important: one of the points that came up in an earlier discussion with customers was whether Kyma supports such user journeys and whether we have tools to define them. In our case, all of this is hard-coded in our code. So this is a very interesting capability. And now for the data plane.
A: The decision here is that the data plane should run in a namespace, as Kubernetes pods, which should be under the control of the customer; this is where scalability comes in. There are also data-processing parts in the system: for example, a channel has a receiver pod which is part of the framework and can be used.
A: The objects that are part of the data plane are, of course, different from those of the control plane. As an example of such an object, someone can define a channel here. The channel has a name in a namespace (everything has a name and, of course, a namespace), and, most importantly, it has a provisioner. If you deploy such a CR in Knative, Knative tries to notify the ClusterChannelProvisioner with the name natss to build the infrastructure which should react to this channel.
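A channel CR of the kind described, with a name, a namespace, and a provisioner reference, would look roughly like this. The channel name is hypothetical; `natss` is the NATS Streaming provisioner mentioned in the talk, and the field names follow the v1alpha1 API of that era:

```yaml
# Sketch: a Channel CR whose provisioner reference tells Knative which
# ClusterChannelProvisioner should build the backing infrastructure.
apiVersion: eventing.knative.dev/v1alpha1
kind: Channel
metadata:
  name: my-channel        # hypothetical name
  namespace: default
spec:
  provisioner:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: ClusterChannelProvisioner
    name: natss           # the NATS Streaming provisioner
```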
A: The channel has no information about subscriptions yet; the subscription part will be added to the channel when someone deploys a subscription.
Here, for example, is the Subscription CRD, which also has a name, and the target of the subscription is the channel, the same channel defined before.
A: This subscription only takes the input; it doesn't have a reply. The reference of the subscriber in the subscription is a service, and it's not a Kubernetes service: it is a Knative service with the name hello. What is important here is that there is no difference between a normal service, say a microservice, and a lambda; all are the same. In the Knative Serving part, services also scale to zero.
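The subscription just described, with only an input channel and a subscriber reference to the Knative service `hello`, could be sketched like this (v1alpha1-era field names; the channel and subscription names are hypothetical, since they were not clearly audible):

```yaml
# Sketch: a Subscription with an input channel and a subscriber, no reply.
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: hello-subscription   # hypothetical name
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: my-channel         # the channel defined before (name not audible)
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service          # a Knative service, not a Kubernetes service
      name: hello
```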
A: Why is this important for us? This was part of the answer: the eventing framework from Knative has multiple implementations of provisioners. If you look into the code from Knative (I can show you a little later), they have an in-memory channel, which is a very simple channel implementation.
A: We have implemented the NATS Streaming provisioner, and other colleagues have implemented the provisioner for Kafka, but that one is a little complicated, because Kafka has to run outside of Knative. In our case, NATS Streaming runs inside of Knative. Google Pub/Sub is also outside. Christoph and the other colleagues are helping with a PoC; they are trying to implement the SAP CP Enterprise Messaging service as an additional provisioner for eventing.
A: In our case, the first step, which will be done, let's say, for this milestone, is to have the Kyma event bus working completely with Knative, so that it doesn't know any interfaces or any implementation of the channel. This is very important, because if you write an application natively against Knative, the application can work in parallel with multiple channel types. If some application, for example, has high-performance requirements for IoT or something like that, you can use a suitable channel at the same time.
A: This is interesting, because the only relation between a provisioner and a channel definition is a static one. In the first step we are now trying to deploy the Kyma event bus and make it use Knative instead of talking directly with NATS, and all the other parts of Kyma will remain the same, because they use the interfaces exposed by the Kyma event bus. The long term was discussed with Ralph.
A: You code Go and deploy it running directly in Knative; it is all based on the creation of containers from the import path of the Go program, so that the Docker image is created automatically. It works very nicely, and it is very easy to write an application, because as a developer for Knative you focus only on writing the code, and all the other stuff is handled for you.
A: I want now to go a little into the code, but only to look at Knative eventing here, where the most important part is the config. Why? Because the whole eventing part is based on ko, so it comes with multiple provisioners. One is natss; and the very simple one, for example, is the in-memory channel, which has no persistence and no ordering guarantees. To install this provisioner...
A: ...the project is designed to be used by ko. What is important is that ko analyzes the imports from your Go code. If you deploy an application now, I can show you: for example, I have a simple application for Knative eventing, where I have written a very simple HTTP service.
A: ...and the subscription. The most important thing is the container: it is this one, which is the path to my Go code in my project. My project is here, and here is the path. This is all that you have to do; you deploy this directly into Knative, and you do not have to write Makefiles or think about Docker or how to build your Docker image. I am trying it now. It compiles the code, builds all the stuff using ko, and then deploys it. And here is my version of the service. It is also a very elegant way to write code, because you have the Go code and only the descriptors for your application; you do not have to focus on the infrastructure and how to build it.
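The manifest pattern being demoed, where the container image field holds a Go import path that ko resolves into a built and pushed image, can be sketched as follows. The import path and service name are hypothetical, and the nested `runLatest`/`revisionTemplate` shape reflects the Knative Serving v1alpha1 API of that time:

```yaml
# Sketch: a Knative Service whose "image" is a Go import path.
# Deploying with `ko apply -f service.yaml` makes ko build the Go binary,
# create and push the container image, and substitute the real image ref.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: hello
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            # hypothetical import path of the Go program
            image: github.com/example/hello-eventing/cmd/hello
```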
D: Okay. I think the first step we are trying to figure out is how to do the installation of Knative with NATS and the channel, so we want to come up with a kind of design first: what are all the steps we should follow, and what changes we need to bring into the current Kyma installation to get Knative eventing and the Kyma eventing on top of it running. This is the first thing we want to do.
A: Currently, eventing is already installed by our Kyma installation, but we also have to think about the NATS Streaming part, which is not yet included as a provisioner, and after that about the event bus: how can we install it, and does it have to be installable both with Knative and without Knative?
D: I can imagine that some kind of ordering change might happen, because when you install this cluster channel provisioner, you might need to install NATS Streaming as part of that, so it might need to happen even before Kyma is installed. So we need to think about all those aspects.
B: Yeah. What we already have currently on the platform is a messaging service. What you see here is a service instance I created just for the demo. This messaging service is based on a messaging broker; at the moment this is RabbitMQ, but soon it will be Solace, and for the consumer it actually doesn't matter which broker is used internally. The principles are what you usually have with such a broker: queues and topics. Here I've created one just for use with the dashboard.
B: You can see this, and we have one subscription saying that all messages with a certain topic, and whatever else comes in, should reach the test queue, completely independent of Knative, and I can use this now. You see there are 1000 messages in the queue, and if this client now opens a connection, it will receive these messages. Now they are gone. I could, of course, also publish new messages.
B: As you see, this is how the messaging service works in principle, and now the question was: how well could this match the Knative eventing approach? There is a second thing to mention: we have been working on functions, and we have different triggers, among them AMQP and Twitter. The AMQP trigger we implemented does in principle a similar thing: it connects to the broker, to a messaging service, and establishes an AMQP connection as a client; the client runs inside the cluster.
B: It connects to the broker, which is not necessarily also running in the cluster; in our case it is somewhere else. Then this trigger has some rules: it receives messages and, based on attributes of the messages, it can select functions to call; and if the function returns data and it was configured accordingly, it can convert the result of the function execution into a message again and send it to some other topic. Usually one receives from queues and sends to topics.
B: This makes the configuration quite flexible: you can decide on the broker side which queues shall be bound to which topics. For the producer it makes no difference; it just publishes to a certain topic. For consumers this is of course important, even if a consumer is not always continuously online.
B: For whatever reason, the connection may be closed for a short time, but there is persistence with the queue, so the consumer can reliably receive messages from the queue. Only if the function in this case was actually executed correctly, or if the trigger is configured to accept failures and still acknowledge the messages, and the client acknowledges a message, does it leave the queue.
B: It is then not available in the broker anymore, and this is a way to have at-least-once reliability. I have seen that exactly-once semantics are not used that often in this context, because then scaling becomes more complicated. With at-least-once delivery it is also very simple to have a second consumer doing the same in parallel, and the message will be delivered to one of the consumers.
B: So if I have two consumers on the same queue, a non-exclusive queue, then they can really work on the broker in parallel, and a single message goes to the one or the other, but not to both; the broker actually has to ensure that. As far as I see, as clients run they receive new messages. So this makes it very simple to scale, and starting from this picture one can think of using this.
B: In Knative eventing. For that, sorry, I don't have a slide to switch to here; we are now doing a PoC to verify that this really works. My colleague has already done a lot of things: I'm still working on the dispatcher, and he has already set up a provisioner and implemented a controller. The overall picture is still quite simple.
B: We translate Knative eventing into what we have: each channel that is defined can be mapped to a topic, and each consumer, each service that subscribes to this channel, would need its own queue and could consume the messages from the queue in a reliable way, could scale, and all the rest that I already mentioned.
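Under this mapping (channel = topic, subscriber = own queue), a channel backed by the Enterprise Messaging provisioner would presumably be declared like any other channel, only with a different provisioner name. Everything here, including the provisioner name, is a hypothetical illustration of the mapping, not the actual PoC code:

```yaml
# Hypothetical sketch of the PoC mapping: this Channel would correspond to a
# topic in Enterprise Messaging; each Subscription against it would get its
# own queue bound to that topic, consumed by a dispatcher.
apiVersion: eventing.knative.dev/v1alpha1
kind: Channel
metadata:
  name: orders
spec:
  provisioner:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: ClusterChannelProvisioner
    name: enterprise-messaging   # hypothetical provisioner name
```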
B: So this means that this dispatcher here will open an AMQP connection and will call the actual service via HTTP, similar to what we do with the AMQP trigger; it is not the same, but in principle not far from it.
B: Currently we are working on this, and I think we will soon see it running. The advantage would then be that we have a reliable and nicely scalable solution. The Enterprise Messaging service is outside the cluster; it is a service instance that one needs to, yeah, buy and pay for, but its being outside the cluster is not a problem, because the dispatcher will connect to the service, and we may also connect a kind of receiver on the incoming side.
B: We could possibly also have a kind of outbound sender that forwards messages to Enterprise Messaging, or we could directly use the HTTP endpoint that the cluster messaging brings as well. So that's the idea, and then we would, so to say, provide an additional alternative, an additional option, to realize channels. For the consumer, or for the one who configures a cluster, it is then, in the best case, not a big decision to switch from the one to the other.
C: Another question: there can be multiple combinations of channels and receivers that receive from a channel, so how does this dispatcher dispatch? Is there a pod per combination of channel and receiver, or does the dispatcher read from a single channel and propagate to multiple receivers?
B: I think the eventing controller would, of course, start and stop dispatchers matching the current CRDs. So whenever a subscriber subscribes to a channel that is based on Enterprise Messaging, the controller on the one hand creates a queue, calling the Enterprise Messaging service directly via its management API, binds the queue to the topic of the channel, and then instantiates or starts the dispatcher, or several of them; that depends only on configuration, or auto-scaling will also be possible. It provides the credentials for the messaging service, as well as the queue name to consume from, to the dispatcher, and of course the endpoint of the service that is consuming, in this case this gray service, behind which is a function; replace it with whatever you want, which is then the subscriber, so to say, receiving data via HTTP inside the cluster.
B: It is quite flexible: you could, of course, employ one dispatcher to take care of multiple queues in principle, since an AMQP connection allows you for sure to handle 10 or 20 or more queues, depending on the load and on the configuration. You could have ten virtual instances working on one single queue if you have a very high load, or you could have one dispatcher supporting 20 channels, as long as the same messaging instance is in use.
D: Just to maybe add one small bit on top of what Christoph already explained: currently Knative eventing kind of leaves it up to the implementer how they want to decide around these aspects, whether they want to have one, let's say, business channel corresponding to a channel in Knative, or whether they want to have an abstraction on top of these.
D: The service mesh should handle HTTPS, and on top of that, I believe, if I understand it correctly, Knative eventing and all these custom sources are extensible, so you can add your own things, like basic authentication or some other way of doing the security, and the dispatcher, because it is your implementation, can take care of that.
D: I'm just curious about this question: I believe that this would maybe all require some kind of capability from the service mesh to have, let's say, selective HTTPS communication, for example if you haven't enabled the whole service mesh with HTTPS and you want to do it selectively. I don't know if they currently support something like this, but I think something like that would also be required as a kind of prerequisite for doing this, if you want to use it.