From YouTube: Kyma Core SIG community meeting 20190130
Description
Meeting notes: https://docs.google.com/document/d/1vWleTon7sJIk0teee4SoVezS4mR3K8TlkvXkgLJwRD8/edit#heading=h.p2ukjuifn4qw
B: Okay guys, we are recording. It's one minute past 3:00; if somebody is late, they can always watch the recording. So welcome to our Core SIG meeting. I'm today's host and moderator, and [name unclear] is my helper when it comes to note taking — thank you very much for helping me. Before we start talking about the agenda, a kind reminder to add yourselves to the attendees list. The link to the agenda is in the Slack channel; I can also add it in the chat here in Zoom.
B: Just let me know if you don't have access to Slack, which would be strange. Then let's go back to the agenda. First topic: our first presenter is going to give you a preview of what we have been preparing for the 0.7 release, which is right around the corner. It's about the enablement of an additional new broker in the catalog; until now it was only usable during provisioning.
B: Now it's really available for runtime setup, and he's going to tell you the details of how it looks now. Then I'm going to quickly talk about the Kyma CLI — really not much, just to point you to the release. We'll also have the usual overview of the proposals, just to remind you to take a look at them.
C: Then I will be the first one to interrupt you — thanks, [name unclear]. Okay: the Azure Service Broker at runtime. What does it mean? I don't know if you have watched the previous SIG Core meetings, but I already showed you how we enabled the Google Cloud Platform broker as a bundle, and we did the same release for the Azure Service Broker. I will start with our bundles repository. It's a repository under the kyma-project organization on GitHub, and we released a new version of the bundles repository.
C: That's the version, and as you can see, the new Azure Service Broker bundle is released. That version of the bundles is already in the Kyma master branch and will be fully supported in the 0.7 release of Kyma. This is the place where you can search for all of the bundles we already support, and the brokers are here as well. Now I will go to a Kyma instance. Right now, the one broker installed during the Kyma installation is the Helm Broker, and it leverages the definitions from the bundles repository.
C: Of course, the Helm Broker itself can be configured to use other repositories; the kyma-project bundles repository is just the default one. You can read more in the bundles repository about how to create your own and expose the bundles in the Kyma Service Catalog UI. Okay, now I will just show you the documentation.
C: This is how it is displayed in the Service Catalog UI. As you can see, it's just a class that can be provisioned, and when that is done, the Service Catalog offering will be extended with the Azure service classes. Okay, then we can go back to a namespace view this time, and I will go to one of the namespaces — let's say "qa". In the namespace brokers view you see nothing here; that means no namespace-scoped brokers are registered. But if you go to the catalog, we see those three main bundles.
C: We released them in 0.3, and the Azure Service Broker is one of them. You can see that it's also marked as "local". That means some resources are installed into the Kubernetes cluster itself — mainly the broker deployment. Then, if we go to the details of that class, in the overview we have documentation.
C: First of all, there is a big note that to provision that class, we need to provide a secret with credentials to Azure, and the steps below describe what you need to do to acquire that secret. At the end, it's just a matter of creating that secret in the given namespace — in this case, in "qa". We also mention that this broker and this class can be installed only once in a given namespace, because it doesn't make sense to install it twice.
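The secret creation step could look roughly like this — a minimal sketch. The secret name follows what is shown in the demo ("azure-broker-data"), while the key names are assumptions based on the standard Open Service Broker for Azure credentials; check the bundle documentation for the actual schema:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Name expected by the bundle, as shown in the demo
  name: azure-broker-data
  namespace: qa
type: Opaque
stringData:
  # Key names are assumptions following Open Service Broker for Azure conventions
  subscription_id: "<AZURE_SUBSCRIPTION_ID>"
  tenant_id: "<AZURE_TENANT_ID>"
  client_id: "<AZURE_CLIENT_ID>"
  client_secret: "<AZURE_CLIENT_SECRET>"
```

Without this secret in the target namespace, provisioning the class fails, as mentioned in the demo.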
C: Then, if we go to the details, you can also read about the input parameters for that class. There is one input parameter that says which offerings we want shown in the Service Catalog: "stable" has just a couple of classes; then we have "preview", which contains the candidates to become stable service classes; and there is a big bunch of classes that are called "experimental", but they have really nice functionality. And of course, that class has bindable set to false.
C: That means we are not able to get service bindings from this class, because it's not supported. Then let's just provision it — let's say I will pick "stable". Of course, I already provided the secret to that namespace, named exactly "azure-broker-data"; without that, the class would fail during provisioning. So let's pick "stable", and it's in the provisioning phase. We can look at the deployments and pods... or not — there is some issue with caching.
C: Okay, in pods we see that some resources are now deployed. The broker itself is starting to be registered, and when the instance is marked as provisioned, we will see the catalog get extended. I will come back here in a minute. I will just go to the "production" namespace, where I already provisioned the Azure broker in the experimental mode, and as you can see, there are quite a lot of classes that we can pick from, and some interesting ones from the experimental set.
C: For example, there is a really nice Text Analytics class, where we can send a string with some sentences in any language, and it can, first of all, detect the language with some scores, and also tell us whether it's a positive sentence or not, and so on. Really nice features, and many other classes. I also provisioned the Azure Redis Cache.
C: I will show it in a minute. You can also see that for all of the classes we added additional documentation describing each and every parameter that can be provided for the given plans, and — most importantly — we also document what the credentials will be when you ask for a service binding. Let's go to the instances.
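Requesting those documented credentials boils down to a ServiceBinding against the provisioned instance — a minimal sketch with hypothetical names:

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  # Hypothetical names — substitute your own binding and instance
  name: redis-cache-binding
  namespace: production
spec:
  instanceRef:
    name: azure-redis-cache
```

Once the binding is ready, Service Catalog places the credentials into a Secret in the same namespace, which is what the per-class documentation describes.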
C: Okay, the Redis Cache I already provisioned is still in the provisioning phase — it takes some time — so I will not be able to create the credentials right now. In the meantime, I will show you the docs. The Text Analytics one should be quite fast. It's also nice that there is a free tier: up to five thousand calls for free.
C: I will just go back to "qa" — maybe the Azure broker with only the stable classes is already provisioned. Yes, it is. Then we can go to the brokers: we see that it's registered and running, and in the catalog you see far fewer classes than in the "production" namespace. It's up to the administrator what kind of secrets and credentials are provided in the cluster; in this case, I just reused the same secret for the two namespaces, but they could be different credentials.
B: Okay, thank you. So, the next topic on the list: the Kyma CLI. I just wanted to give you some hints on this topic, because we showed the Kyma CLI some time ago on one of our Core SIG meetings. It's an incubating project, so its release cycle is not bound to the Kyma release cycle, but we know many people are interested in this topic. So just a reminder that we just released another version of the CLI.
B: You might want to check it out — I'll click on the link. Like I said, the releases are done independently from Kyma. So if you're also interested in this incubating project, just remember to use the GitHub feature where you click on the "Watch" button at the top and select "Releases only" — then I will not have to remind you every time we have a new release here.
B: So, just to point out, we have three proposals waiting for your feedback at the moment. The first two — the components folder consolidation and the tests folder consolidation — are related to our continuous work on improving the contribution experience. We basically noticed that we are not really consistent when it comes to the naming of the components: it's hard to differentiate, without reading the README, which component is what — which is a controller, which is a service, etc. It doesn't look good, so we want to improve that.
B: And as you probably know, one of our logging components is based on OK Log, but the OK Log project has been archived, which means we need to decide what to do with OK Log in the future. There's a proposal about that: we could replace it, or fork OK Log ourselves and continue the project, or there are also some other projects to which we could switch. That's what the proposal is about.
B
So,
if
you're
interested
with
this
topic,
please
take
a
look
on
the
proposal
and
that's
it
from
my
side
from
these
topics
and
the
next
one
is
community
slot
and
the
working
group
about
K
native
and
kima
integration.
Gareth,
please
yeah
continue.
If
you
have
to
share
the
screen,
then
let
us
know
then.
A: The need for this arises because we are already aligning with the Knative eventing community. As part of the discussions, there was a need for the contributors, organizations, and stakeholders interested in moving Knative eventing forward to bring their use cases, so that at the Knative level they can identify a set of minimal, primitive use cases which can be built into Knative eventing itself. The other stakeholders or organizations can then derive their own use cases from what Knative eventing provides.
A
So
I
will
quickly
go
through
what
in
key
my
venting
as
a
use
case.
We
have
at
present
so
starting
with
the
first
one.
We
have
the
traditional
use
case
of
extensibility
and
integration.
We
want
to
be
able
to
extend
any
external
solution.
It
can
be
lexy
application
or
a
third
party
system
that
is
wants
to
do
some
new
announcements
and
then
why
are
the
events
we
want
to
do
those
enhancements
and
extensions
in
the
in
the
kima
itself?
A
So
this
is
classical
that
we
have
the
events
coming
via
the
API
gateway
and
then
going
to
the
event.
Bus
and,
from
the
event,
was
things
a
service
that
is
able
to
consume
those
events
and
then,
by
consuming
those
events,
its
able
to
extend
the
functionalities
of
the
existing
features
of
application
and,
at
the
same
time,
since
we
are
building
on
top
of
Kennedy
and
Kennedy,
provides
this
nice
concept
of
plug
ability
when
it
comes
to
the
pubsub
solutions,
so
that
even
bus
itself
can
be
built
on
top
of
more
than
one
pub/sub.
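In the Knative eventing API of that era, this pluggability surfaced as a Channel backed by a ClusterChannelProvisioner — a minimal sketch with hypothetical names, assuming the v1alpha1 API Knative eventing used at the time:

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Channel
metadata:
  # Hypothetical channel name for one event type
  name: order-created
  namespace: production
spec:
  provisioner:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: ClusterChannelProvisioner
    # Swapping the provisioner swaps the backing pub/sub,
    # e.g. a NATS Streaming-based one vs. a cloud pub/sub
    name: natss
```

The channel abstraction stays the same regardless of which provisioner backs it, which is the pluggability being described.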
A: One can be a local pub/sub — a batteries-included approach, a minimal, lightweight solution — but at the same time there can be another cluster channel provisioner that talks to a cloud pub/sub and utilizes its features. So this is quite nice: you have pluggability and, at the same time, the possibility of using more than one pub/sub as a backing service for the Event Bus in Kyma. That is the first use case. I'll then go to the second one. This is something for which we have a vision.
A: We don't have this in place right now, but it is something we plan to have at some point. It's about how we can have asynchronous cross-workflows, defined at a high level, that can be used to achieve some business goals. They can be triggered externally, or internally inside Kyma — it doesn't matter how they get triggered. It's all about the fact that, as a user, as a Kyma developer...
A
At
some
point,
you
should
be
able
to
define
some
set
of
steps
that
you
want
to
do
in
a
disputed
fashion
in
kheema,
and
then
why
passing
the
events?
Why
the
event
bus
you
should
be
able
to
achieve
that
pipeline
or
else
workflow
features
again.
It
can
be
backed
by
more
than
one
up
stop.
So
it's
all
kind
of
really
pluggable
and
configurable.
A
Going
to
the
the
third
one
is
quite
interesting
as
if
it's
more
about
integrating
with
a
third
party
matching
system
and
by
integrating
with
the
third
party
messaging
systems,
we
can
also
consider
integrating
with
the
cloud
ecosystem
of
that
particular
massing
system.
For
example,
we
can
have
a
event
coming
from
an
external
application,
and
then
we
can
send
this
event
to
work
out
pops
up.
A
We
should
be
able
to
trigger
some
actions
in
that
cloud:
ecosystem,
for
example,
creating
a
user
in
a
Google
cloud,
pub/sub
or
Google
cloud
ecosystem
or
creating
some
metadata
object
that
is
required
for
achieving
some
business
scenarios.
The
other
use
case
that
we
have
is
again
a
mirror
replica
for
this
one,
but
in
this
case
we
capture
some
events
that
happen
in
ecosystem.
You
know
in
a
cloud
providers
system,
for
example,
GCP,
and
we
use
those
events
and
then
to
do
something
in
the
external
application.
So
it's
all
about.
A
So
this
is
the
first
item.
I
have
and
I
will
now
go
to
the
second
part,
which
is
more
about
having
a
vision
for
the
key
native
adoption,
how
we
plan
to
move
further
with
adopting
creative
in
kheema
and
then
how
does
the
creative
eventing
will
help
for
the
key,
my
venting
move
forward
and
what
is
how
it
will
look
like
so
going
further.
A
This
is
kind
of
the
high-level
view
how
we
see
the
things
can
evolve
in
the
future,
so
we
can
have
more
than
one
event
coming
in
to
Kemah,
and
then
we
can
have
more
than
one
observe
configured
by
virtue
of
using
Kennedy,
and
it
should
be
possible
that
we
can
specify
for
a
particular
event
to
which
pubs
or
it
can
be
stored.
So
this
has
interesting
ramifications.
A
So
one
of
the
interesting
aspects
is
that
this
helps
the
customer,
based
on
the
criticality
of
the
events,
achieve
the
cost
benefits,
for
example,
for
some
events
that
are
highly
business
critical.
They
can
always
go
to
a
pub
sub
that
is
provided
by
a
cloud
provider,
for
example
Google
cloud
pub/sub
or
aks,
or
your
and
at
the
same
time
there
are
some
events
that
are
more
for
a
kind
of
fire-and-forget
approach,
and
then
they
are
not
highly
critical,
so
they
can
be.
A
They
can
also
be
stored
in
some
kind
of
a
local
web
server
or
something
that
is
not
so
costly
as
compared
to
going
for
a
Google
or
some
other
cloud
pub/sub.
At
the
same
time,
the
same
goes
for
the
consumption
part.
You
can
also
have
multiple
cloud
pub/sub,
backed
by
Kay
native
eventing,
and
then
we
care
the
possibility
to
consume
the
events
from
there.
So
this
is
a
briefly
high-level
flow
that
displays
that
how
two
events
are
flowing
and
then
how
their
consumption
and
storing
is
happening.
A
This
thing,
and
this
as
we
can
of
course,
decouple
this
into
multiple
concepts
that
together
compose
the
event
bus,
so
the
first
one
is
how
we
are
going
to
provision
a
messaging
solution.
So
at
present
this
is
left
open
because
we
want
to
be
here.
We
want
to
be
ensure
that
anyone
who
has
the
kinetic
eventing
abstractions
implemented
they
can
come
and
enable
their
pops
up
inside
keema.
So
there
is
kind
of
no
one
particular
way
of
doing
it.
A: The main thing that we are currently working out is how we want to integrate the Kyma event types with all those aspects of Knative eventing, and we have one idea floating around where we want to have one Kyma event type aligned with one Knative channel. Currently, this is based on the reasoning that most of the implementations done in the Knative community follow the same path, but the channel is a slightly heavy object: when we talk about the Knative channel, it creates a lot of services behind the scenes.
A
For
example,
it
creates
one
is
two
virtual
service
and
also
it
creates
a
cumulative
service.
At
some
point,
the
kinetic
eventing
might
really
look
at
this
approach
and
they
might
try
to
make
it
more
lightweight
or
change
it
in
a
way
that
it
is
not
consuming
too
many
resources.
So
this
is
something
that
we
might
come
back
and
then
fix
it
along
with
the
community,
but
for
now
we
are
following
the
approach
of
having
one
channel
and
one
even
type
mapped
as
one
to
one
yeah
so
going
further.
A
So
there
are
a
lot
of
details
how
the
things
can
happen
in
the
future,
but
the
main
thing
that
I
want
to
share
here
is
that
we
had
some
discussions.
How
we
wanted
to
choose
a
particular
pops
up
for
an
event
type,
because
we
want
to
give
the
customer
and
eventually
the
operator
who
is
operating
kima
this
possibility
that
they
can
decide
which
pub/sub
they
want
a
particular
event
to
go.
So
we
thought
that
maybe
we
should
start
with
something
like
that
default
pop
serve.
A
For
now
that
can
be
NAT
streaming
and
at
some
point
we
can
provide
a
pops
up
per
application.
So
this
is
interesting
because
you
have
application
and
then
you
already
are
in
a
cloud
system,
so
you
might
want
to
prefer
it
for
using
it
for
that
particular
cloud
based
pops
up
as
compared
to
something
else.
So
when
you
configure
application,
you
should
be
able
to
specify
okay
I
want
to
use
this
particular
pops
up
for
this
application.
A: Sorry for that. So this is where we have a proposed design, and we will slowly take it forward from there. We also reached one important decision. We thought about how we should create a channel, because when we create a channel in Knative, it implies that we are creating some backing resources in the pub/sub. One design aspect that came into the picture was that Kyma is all about consuming the event and then triggering some compute or some extensions by virtue of that.
A: So it makes perfect sense if we only start creating those resources when there is an interest in consuming those events. Whenever a subscription is created, we ensure that only then do we create the backing resources, including the Knative channel that is required for getting the events triggered. This works quite nicely; the design is quite elegant in that way.
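The lazy-creation trigger described here would be a Knative Subscription — a sketch with hypothetical names, again assuming the v1alpha1 API of the time:

```yaml
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  # Hypothetical subscription name
  name: order-created-subscription
  namespace: production
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: order-created
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      # The consumer whose interest in the events triggers creation
      # of the channel and its backing pub/sub resources
      name: order-processor
```

The design point is that the channel and its backing resources only come into existence once a subscription like this expresses interest in the events.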
A: There is only one challenge, where we also need to cater to the use case of sending the events to a third-party messaging system. That is one exceptional scenario, and we need to give it some thought. We came up with the idea that maybe we should have some operator intervention, where you can manually configure that we should create the channels up front, so that the events can be sent to the external pub/sub without even having a subscription in Kyma. So this is one thing that we might come up with as a design.
B: If not, just remember we have an open Slack workspace, so you can join the eventing channel, or also the Knative working group channel, if you would like to follow up with some questions — in case you're too shy to ask them now, which is totally fine. Okay, so the last five minutes are for questions and feedback. Is there anybody who would like to...
A: ...say something? Just to recall: I don't have a question, but one comment. We have already started upgrading the Minikube version and the Kubernetes version for Kyma. Those PRs are already merged, and we are already keeping an eye on the stability. So this will be the next step. Currently, the Minikube version is 0.33.0, and the Kubernetes version is 1.11.