From YouTube: Kyma Prow Migration WG meeting 20181012
A: Just let me know... oh yeah, I can see that, okay, perfect, okie dokie. So it's our second meeting, and maybe let's get started with our agenda for today. First of all, we are going to review our action items from the previous meeting for a starter. Then we have two additional topics: one is secret management for Prow, which [inaudible] will present for us, and the other topic is defining a development workflow for test-infra, by [inaudible].
B: Sure, I can update you quickly, of course. I basically prepared provisioning scripts, and after getting some feedback I improved them a little bit as well, but the part about taking secrets from Vault is missing, mostly because it's not decided yet, and I think [inaudible] is already going to present to us which secrets management to go with, and after it's decided I will update that one as well.
B: Actually, there is one discussion item: whether we want to install more than one Prow into a cluster. This is one thing. And another thing: while looking at the issues, I noticed the comments from one of the lead developers of the test-infra team. They are saying that, for security reasons, we should always run the jobs in a different cluster than the one with the Prow components.
B: Yes, so, in another cluster, or in another namespace in another cluster. And yeah, this is one of the comments from one of the developers from test-infra. They say it's highly insecure to run the jobs inside the same cluster as the Prow components, because we are supplying secrets to the components, and they say it's quite easy to, you know, pick up these secrets by creating malicious jobs.
B: I don't know if it's really needed for us at the moment, but you know, I think whatever we start with will continue for a long time, so if we start with something simple, I think it would be hard to change it later on. That is one thought; another thought is that it's just overkill at the moment. I don't know, what do you think?
B: Running two Prows inside one cluster, I haven't tested it before, but I saw a couple of issues. Actually, this is one of them: it says that as soon as the user switches to a different namespace, everything breaks, and it's still open. So maybe for now we can just have this issue done, and then later on we can also think about having another Prow for testing purposes, but that should be a different issue, I think. Okay.
A: Do you want to discuss all your items now, or do you mind if I cut in with something else in between? Yeah? No problem, we can. Okay, okay. So the next one will be my question about the two components. As far as I remember, we agreed on one JS component and one Go component. [inaudible], any details?
F: So the only detail for now is that I just specified more in the tickets. So we're not with one person; it's gonna be me and [inaudible], who's at the call. He will drive that topic further. So we'll just migrate our API layer, which is a Go project, and probably one or maybe all of our JS views, and prepare everything so others can then continue. That's the goal.
A: The next one is: investigate provisioning a Google cluster for integration tests, and this one is on me, so a broad update from myself. We didn't really start working on that per se, but we already have a bash script provisioning a Google cluster, so it can be easily reused in the Prow scenario, and so on. The issue is in, let's say, the installation team.
B: I recently started working on that. So there are two ways to do it on a virtual machine on Google Cloud. One way is enabling nested virtualization, which looks overly complicated for our scenario. That's why I first tried to do it using the --vm-driver=none option of Minikube. So basically, we can provision a VM on Google Cloud by using the Google Cloud SDK. So it actually will...
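As a rough sketch of what the VM-plus-Minikube approach above could look like (a hypothetical example; the instance name, zone, machine type, and image are placeholders, not details from the meeting):

```shell
# Hypothetical sketch only; all names and the zone are placeholders.
# Provision a plain VM on Google Cloud with the Cloud SDK:
gcloud compute instances create prow-test-vm \
  --zone europe-west1-b \
  --machine-type n1-standard-4 \
  --image-family ubuntu-1804-lts \
  --image-project ubuntu-os-cloud

# Then, on the VM itself (Docker and kubectl must already be installed),
# start Kubernetes without nested virtualization via the "none" driver:
sudo minikube start --vm-driver=none
```

The "none" driver runs the Kubernetes components directly on the host, which is why no nested virtualization is needed.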
E: Can you see? Yes? Okay, great. So basically, Google Cloud doesn't provide any Vault as a service, let's say. The only thing they provide is KMS, Key Management Service, or something like that, which is a service for encrypting secrets, not for storing them. Also, the way they recommend to manage secrets, which is described here on this link, is to use KMS for encrypting the secrets but storing them somewhere else, for example in GCP storage.
E: So one option, if we want to use only GCP, is to use this GCP KMS for encryption and store those secrets in a storage bucket somewhere. Of course, access to the KMS and the storage bucket will be only for selected people who will be responsible, for example, for creating the clusters and administrating them, etc.
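The KMS-plus-bucket flow described above might look roughly like this (a hypothetical sketch; the keyring, key, file, and bucket names are placeholders, since nothing was decided in the meeting):

```shell
# Hypothetical sketch only; keyring, key, and bucket names are placeholders.
# Encrypt a secret locally with Cloud KMS:
gcloud kms encrypt \
  --location global --keyring prow-keyring --key prow-key \
  --plaintext-file github-token.txt \
  --ciphertext-file github-token.enc

# Store only the ciphertext in a bucket restricted to selected people:
gsutil cp github-token.enc gs://prow-secrets-bucket/

# Someone with access to the key can later decrypt it:
gcloud kms decrypt \
  --location global --keyring prow-keyring --key prow-key \
  --ciphertext-file github-token.enc \
  --plaintext-file /dev/stdout
```

This matches the recommendation mentioned above: KMS handles the encryption while the storage bucket holds only the ciphertext.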
E: Another option is to use a full-blown solution for secret management like HashiCorp Vault, but for me it's a little bit of an overkill just to store a few secrets (I don't know how many) for Prow. Maybe another option is to use something we have internally, of course, but as I stated, we should not be coupled with our internal Jenkins Vault, and I don't know if we want to go that way, to make such a coupling with something internal here in our company.
E: It can be part of this installation script, I think. I guess we will manage those secrets manually. I mean, someone with proper rights will get access to KMS and to a GCP storage bucket, put the generated secrets there (I don't know, some GitHub tokens or some other secrets), encrypt them, and...
D: Okay, so I defined an issue with the questions which came to my mind, which I think block us a little bit before we really start working on the implementation of Prow. The purpose of this task is to answer them and document them somewhere. So the first question is how we want to work on changes in Prow. It is possible to run Prow locally on Minikube, then commit our changes, and then test on the cluster. [inaudible], do you have any experience with that?
D: I have the same experience, but I was testing it for a shorter period of time. And so the next question is how many Prow clusters we want to have, because probably we want to have one Prow which contains, you know, the most up-to-date and final solution, but we also need other Prow clusters for testing our changes, and how do we want to organize that? Can we ask for some distinct clusters, and whom can we ask? That is my question to all of you.
G: I think the cost is not so big, so the common-sense rule is that you can have as many clusters as you want, as long as you delete them when you stop working on them. The provisioning is pretty fast: setting up the cluster takes about three minutes, and installing Prow on it probably the same, maybe shorter.
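The throwaway-cluster workflow described above could be sketched like this (hypothetical; the cluster name, zone, and node count are placeholders):

```shell
# Hypothetical sketch only; cluster name and zone are placeholders.
# Create a short-lived development cluster (takes a few minutes):
gcloud container clusters create prow-dev-$USER \
  --zone europe-west1-b \
  --num-nodes 3

# ...install Prow on it and test the changes...

# Delete it when you stop working on it, per the common-sense rule:
gcloud container clusters delete prow-dev-$USER \
  --zone europe-west1-b --quiet
```

Embedding `$USER` in the name is just one way to keep per-person clusters distinct; the team would pick its own naming convention.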
D: We'll see, maybe. And we also had a discussion on Slack about whether we should currently define and run our jobs on the Kyma project repository, or use some different repository, for example a fork, because we were afraid we could somehow block development on Kyma because of some problems with the Prow configuration.
D: But as far as I understand, [inaudible] said that, as long as we do not add too many plugins, it would be safe to use the real Kyma repository. But I'm also thinking whether we should create some test repository where we can create some dynamic commits that would trigger ProwJobs. What do you think about that?
D: And when will we know that we can disable that on our internal CIs, for example? Currently we have (I'm not sure how it is right now, but currently we have) some differences between releasing Kyma and merging to master. So right now we should also take into account that when we bring the configuration from Jenkins, we should also be able to release from the Prow jobs. Yes.
B: What I had in mind in the beginning was to have one build job for all the components, because that build job would only call make commands, and you know, it doesn't matter which component it is; for that, for only one build job, you know, it makes sense to have one unified build setup. But I don't know, what do you guys think?
D: As far as I know, I had commented that it looks like, in the one Docker image, we installed tools for both Golang and Node.js. So this was my question, whether we should do that. I think we can also have another approach (maybe it was suggested by me) where we have one base image with some common tools, and from that image we will create two more Docker images, one with the Node tools and the second with the Golang tools.
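The layered-image idea above could be sketched roughly as follows (a hypothetical example; the image names, base distribution, and tool sets are placeholders, not decisions from the meeting):

```shell
# Hypothetical sketch only; image names and tool choices are placeholders.
# One base image with the shared tools:
cat > Dockerfile.base <<'EOF'
FROM alpine:3.12
RUN apk add --no-cache bash git make
EOF
docker build -t ci-base -f Dockerfile.base .

# A derived image adding the Golang toolchain:
cat > Dockerfile.golang <<'EOF'
FROM ci-base
RUN apk add --no-cache go
EOF
docker build -t ci-golang -f Dockerfile.golang .

# A derived image adding the Node.js toolchain:
cat > Dockerfile.node <<'EOF'
FROM ci-base
RUN apk add --no-cache nodejs npm
EOF
docker build -t ci-node -f Dockerfile.node .
```

The shared layer is built once and both language-specific images reuse it, which keeps the individual images smaller than one image carrying every toolchain.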