From YouTube: Kyma Prow Migration WG meeting 20181019
Description
Meeting notes: https://docs.google.com/document/d/1ljEAoCBJXlxx_ATPyvKZ1KoyFOSIBzEAOkN-2H-HhUY/
A
So let's go quickly through our agenda. Yes, so, as usual, we are going to quickly review the action items from the previous meeting. Another point we'll discuss will be improvements in Prow installation and configuration; this point will be held by Adam. Then migration of Kyma components to Prow, concerns and discussion, by Michał. And the last one, backlog refinement, by Adam. Okay.
B
Solomon, yes, let me continue. It's actually almost finished; I'm just going to create a PR quickly. Let me maybe quickly show you what I have done. Well, basically, there are no secrets here. This one is the script that the Prow job will run. What it's doing is: first, it authenticates to Google Cloud using a service account that I created before, and then, on the fly, it creates a new instance with the size that we need for our integration tests. Later on, it copies the script that I also wrote over to the newly created VM, and then it runs the script.
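A minimal sketch of what such a provisioning script might look like; the instance name, zone, machine type, and file names here are assumptions, not the actual values from the PR:

    #!/usr/bin/env bash
    set -o errexit

    # Authenticate to Google Cloud with the pre-created service account.
    gcloud auth activate-service-account --key-file="${GOOGLE_APPLICATION_CREDENTIALS}"

    # Create a fresh VM, sized for the integration tests, on the fly.
    gcloud compute instances create kyma-integration-test \
        --zone=europe-west1-b --machine-type=n1-standard-4

    # Copy the test script to the new VM and run it there.
    gcloud compute scp --zone=europe-west1-b run-tests.sh kyma-integration-test:~/
    gcloud compute ssh --zone=europe-west1-b kyma-integration-test -- bash run-tests.sh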
This part is going to be run by the Prow job. And what this script does on the VM, let me show that to you as well. Well, first, it starts with installing the necessary tools on the VM; these tools include Docker, kubectl, Helm and Minikube. Then it starts a Minikube cluster with the option --vm-driver=none.
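A sketch of that on-VM script, assuming typical install steps of the time; the versions and download URLs are illustrative only:

    #!/usr/bin/env bash
    set -o errexit

    # Install the tools the tests need: Docker, kubectl, Helm, and Minikube.
    sudo apt-get update && sudo apt-get install -y docker.io
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64/kubectl
    sudo install kubectl /usr/local/bin/
    curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
    curl -LO https://storage.googleapis.com/minikube/releases/v0.28.2/minikube-linux-amd64
    sudo install minikube-linux-amd64 /usr/local/bin/minikube

    # --vm-driver=none runs the cluster directly on the VM (this is why sudo is needed).
    sudo minikube start --vm-driver=none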
B
Let me show you the last one I ran. Actually it was a small change and, yeah, as you might see here: first, it's creating the instance on the fly, then it's performing all this installation stuff up until here. Then it's building the Kyma-Installer, then it's deploying all the Kyma components onto this Minikube cluster. Here you can see that Kyma is installed successfully, and then it starts running the integration tests.
B
It wasn't, actually, because Minikube needs to be run with sudo, because of this --vm-driver=none option. And for that script, I actually tried at first, and some of the things should be run as root whereas some shouldn't be, yes, and I couldn't manage to use it successfully. That's why I copied some of the parts from that script over to here; so for some I used sudo and for others I didn't. Of course we can take a look and try to use it fully.
A
Okay, so the next item is about implementing encrypting and decrypting the secrets using KMS and storing them on GCS, and another one is implementing reading the secrets. So, two issues very close to each other, with high priority, and they don't have an assignee at the moment. So I guess maybe we are looking for happy volunteers.
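For reference, a hedged sketch of the Cloud KMS plus GCS flow being described; the keyring, key, and bucket names are made up:

    # Encrypt a secret with Cloud KMS and store the ciphertext in a GCS bucket.
    gcloud kms encrypt --location=global --keyring=prow-keyring --key=prow-key \
        --plaintext-file=oauth-token --ciphertext-file=oauth-token.encrypted
    gsutil cp oauth-token.encrypted gs://kyma-prow-secrets/

    # Reading the secret back: download the ciphertext and decrypt it.
    gsutil cp gs://kyma-prow-secrets/oauth-token.encrypted .
    gcloud kms decrypt --location=global --keyring=prow-keyring --key=prow-key \
        --ciphertext-file=oauth-token.encrypted --plaintext-file=oauth-token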
C
Can you see my screen? Okay, I see thumbs up. So, during this week my focus was on enabling other developers to start working on the Prow migration. So, first of all, we updated the README file in the test-infra repository about the development process. This is the output from the previous week. So here you can find the information that you cannot test Prow locally.
C
So you can just use that script to provision a new cluster and test your changes. And next, we made some changes in the install-prow script. So, first of all, at the beginning we create an HMAC token, and now we store the token in a text file, because the token can be required later. Previously it was just printed on the screen, and if we forgot about that, it was just lost. And the second thing is reading the OAuth token.
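A minimal sketch of the HMAC-token change as described; the file name is an assumption:

    # Generate the HMAC token for the GitHub webhook and keep it in a text file
    # instead of only printing it, so it can still be retrieved later.
    openssl rand -hex 20 > hmac-token.txt
    kubectl create secret generic hmac-token --from-file=hmac=hmac-token.txt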
C
Previously, when you provided the OAuth token, it was displayed on the console, and now we read it in the same way as passwords are read, so we just hide the input you provide. So it's a small improvement in terms of security.
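The hidden prompt presumably follows the usual Bash pattern for reading passwords:

    # Read the OAuth token without echoing it to the console, like a password.
    read -r -s -p "GitHub OAuth token: " OAUTH_TOKEN
    echo
    printf '%s' "${OAUTH_TOKEN}" > oauth-token.txt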
Also, we updated the documentation about Prow installation, and so, for example, here you have a section which describes how to check if Prow is correctly installed.
C
So maybe I will reveal that currently we are using only two plugins. One is trigger, which allows you to trigger jobs on, for example, creation of a pull request. The second is about cats, so you can add a comment on your pull request and a cat image will be displayed. What is the purpose of this plugin? To test if everything is correctly configured.
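A sketch of what that minimal plugins.yaml could contain for a personal fork; the repository name is a placeholder:

    # plugins.yaml: enable only the trigger and cat plugins for your fork.
    cat > plugins.yaml <<'EOF'
    plugins:
      <your-github-username>/test-infra:
      - trigger
      - cat
    EOF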
C
So if your bot is able to comment on your pull request... this is the only purpose of this plugin. And, as I said at the beginning, we decided to check our changes against our forks, so this plugins configuration will differ between users and between the fork and the original installation.
C
Currently we are working on providing a minimal configuration for config.yaml, so my goal is to also provide a template in which some parts will later be replaced by your username. And here we are going to define only one job for the ui-api-layer component, and later Michał, hopefully, will use that to continue his work. Let me check if I said everything... yeah, probably yes. So in case of any questions, please ask us on our Slack channel and probably we can help you. That's all, thank you.
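A hedged sketch of such a single-job template; the job name, image, commands, and the {{username}} placeholder convention are assumptions, and some required fields are trimmed for brevity:

    # config.yaml fragment: one presubmit job for the ui-api-layer component.
    cat > config.yaml <<'EOF'
    presubmits:
      {{username}}/kyma:            # replaced with your GitHub username
      - name: pre-master-kyma-components-ui-api-layer
        agent: kubernetes
        always_run: true
        spec:
          containers:
          - image: golang:1.11
            command: ["make"]
            args: ["-C", "components/ui-api-layer", "ci-pr"]
    EOF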
E
Docker, git, curl, etcetera; so, useful tools for all of the builds. And then this is the root image, and then they create some other images, like, I don't know, gcloud, which use the bootstrap image. So I think we should do it the same way: we should prepare the bootstrap image, and then we should prepare the golang, node, etc. images. Unfortunately, we cannot use the bootstrap image from Google, from the kubernetes/test-infra repository, because they use Bazel for their build, and we don't need that, I think.
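A sketch of the layered image build being proposed; the base image, tags, and versions are illustrative:

    # Root/bootstrap image with the shared tools: Docker, git, curl.
    cat > Dockerfile.bootstrap <<'EOF'
    FROM debian:stretch
    RUN apt-get update && apt-get install -y docker.io git curl
    EOF
    docker build -t prow-bootstrap -f Dockerfile.bootstrap .

    # Language-specific images built on top of the bootstrap image, e.g. golang.
    cat > Dockerfile.golang <<'EOF'
    FROM prow-bootstrap
    RUN curl -fsSL https://dl.google.com/go/go1.11.linux-amd64.tar.gz | tar -C /usr/local -xz
    ENV PATH="${PATH}:/usr/local/go/bin"
    EOF
    docker build -t prow-golang -f Dockerfile.golang .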
E
So the Docker image is a good solution, and also the bootstrap image contains, let's say, something similar to the Jenkins workspace: we've got the Google Cloud client and git, and then the golang, node, etc. images on top. Right, I think the size of these images is not the problem, because the images will be cached on the Kubernetes nodes; it's not like in Jenkins, where the big image was always downloaded, so it will be faster than previously. Okay, the other part is what they have prepared.
E
Probably they should also have... it, let's say, contains a framework for testing, and then in every repository they have, wait a moment, a test folder which uses test scripts. No, sorry, not here; I've got it right now, okay. But in every repository they use those scripts for building those repositories. So the first question is: should we use Bash scripts or Go scripts? Because we are not specialists at Bash, but the Go language we use every day. So what do you think about Go scripts instead of Bash?
E
For example, the presubmit test script from kubetest looks like that, right? And then you have parameters which control the flow, for example whether you need an integration test, and then you need to create a script which will implement those functions. And I don't know if we want to make that in Bash, or maybe we want to make a small Bash script which will execute the whole... the Go scripts. I don't know, it was just a question, a suggestion. I think that's all on this from me.
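The "small Bash script which executes the Go scripts" idea could be as thin as this; the script path and flag are hypothetical:

    #!/usr/bin/env bash
    set -o errexit
    # Thin wrapper: all of the real pipeline logic lives in the Go program.
    go run ./scripts/presubmit.go --integration-tests=true "$@"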
C
One drawback here is that we can have quite a huge list here: if we have, for example, 20 components, it can contain, say, 20 checks, and only one will really say that it succeeded while the others were skipped. But, to be honest, I did not check if this is configurable, and maybe this "skipped" can be not sent, so it's not displayed here. So I'm not sure about that.
C
So the first one is an investigation of how we want to provide source code for our jobs. So in Prow, in the job definition, we can say that a job is decorated, and this means that the repository will be cloned and will be available in the Docker container; but it doesn't work out of the box. When I enabled it, I saw some errors, and I also found that other projects use a different approach.
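For context, "decorated" refers to the decorate flag in a Prow job definition, which makes the pod utilities clone the repository into the job's container; a sketch with an illustrative job name:

    # config.yaml fragment: decorate: true asks Prow to clone the repository
    # into the container before the test command runs.
    cat >> config.yaml <<'EOF'
    presubmits:
      kyma-project/kyma:
      - name: pre-master-kyma-integration
        decorate: true
        spec:
          containers:
          - image: golang:1.11
            command: ["./run-tests.sh"]
    EOF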
C
OK, the next one is about a static analysis tool for our shell scripts. So currently in the test-infra repository we have a few shell scripts, and today Michał showed me that they are not the best. As he said, it's quite difficult to write a correct shell script, but still we can use some static analysis tool to improve them. One such tool is ShellCheck, and here there's also a point that maybe we should define a pipeline for our tests.
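Running ShellCheck over the repository could be as simple as this (paths assumed):

    # Lint every tracked shell script with ShellCheck.
    find . -type f -name '*.sh' -print0 | xargs -0 shellcheck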
C
So please, Michał, comment on that issue; maybe we can also add that as part of our documentation. Yeah, right, okay. And the next issue is about provisioning the production Prow cluster, so the cluster that will be configured against kyma-project/kyma, and provisioning of such clusters should be fully automatic. And this task depends on the issue about setting up the domain and certificates for the Prow cluster.
C
Next, it would be nice to have an automatic update of the official Prow cluster after merging to the test-infra master branch, so to have it fully automatic. And probably the last one: it's about defining a strategy for organizing jobs in config.yaml. Config.yaml can be quite big when we add jobs for every component in only one file, and later it can be difficult to maintain. So we should check how other people approach that problem; for example, Istio defined their strategy, so they use separate files.
C
For example, all periodic jobs are defined in one file. And I think we should also start investigating it; maybe it's not the highest priority, but still an important thing we should have done before we ask all the developers to migrate their components to Prow. Okay, that was all, so if you have any ideas, please comment, please start discussions in those issues or on our Slack channel, because that was just what we thought would be beneficial to have, but we are open to other topics. That's all. Thank you.