From YouTube: Kyma Prow Migration WG meeting 20181116
Description
Meeting notes: https://docs.google.com/document/d/1ljEAoCBJXlxx_ATPyvKZ1KoyFOSIBzEAOkN-2H-HhUY/edit
A
Hello everyone. Can you all hear the call? It is on.
A
Okay, so let's briefly go through the agenda. We have a few points over there. The first one is current status and next priorities, by Adam. The next one is an overview of recent changes in the test-infra repository, by Karolina, so I guess it is about documentation. Interesting.
B
Okay, let's start with a review of what has been done in the recent week. As you can see, 16 pull requests were merged in the test-infra repository, so it's some kind of record, and also eight new issues were created. Let's have a look at the board. In the To Accept column, we have the following issues. The first is to simplify applying the jobs configuration after provisioning the Prow cluster.
B
So you probably remember that at some point we decided to split our configuration into two separate files, and after that change we were unable to use our scripts to validate our configuration and also update it. So in this particular issue we are tackling that problem, and we have also added a static-analysis tool for all shell scripts.
B
The next one is the proposal for a release process. Maybe the title is a little bit misleading, but in this story we want to describe how we can support the release process in Prow from the technical point of view, and here we describe potential solutions. We also have two documentation tasks, which were done by Karolina, and she will present them today in more detail. In the next column, we have the threat modeling issue, and also, recently we noticed that sometimes our Kyma integration job fails because the zone has not enough resources.
B
So here we want to mitigate that problem somehow. And maybe let's have a look at the presentation which I showed you at the previous meeting, where we defined that our next goal is to enable migration of components, and we have some prerequisites that we need to address before that. So first, we need at least one working example, and we have two blockers for that.
B
First, there was the issue about the proposal for the release process, so that landed in the To Accept column. And we need to define a postsubmit job for ui-api-layer. Previously we said that it is easy-peasy, but unfortunately we noticed that on postsubmit jobs we are unable to configure the run_if_changed parameter, and this can cause the situation that, after merging to master, all our components will be rebuilt. So probably, for now, we can accept that, but we also created an issue to somehow support such a situation and add some feedback there.
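For reference, a Prow presubmit can use run_if_changed to trigger only when matching files change, while (as discussed above, at the time of this meeting) postsubmits could not. A hypothetical sketch, where the job names, image, and script paths are made up and not the actual Kyma config:

```yaml
presubmits:
  kyma-project/test-infra:
    - name: pre-test-infra-validate-scripts   # hypothetical name
      # Runs only when a shell script in the repo changes.
      run_if_changed: ".*\\.sh$"
      spec:
        containers:
          - image: some-bootstrap-image:tag   # placeholder
            command: ["scripts/validate-scripts.sh"]

postsubmits:
  kyma-project/kyma:
    - name: post-master-components            # hypothetical name
      # No run_if_changed here: every merge to master triggers
      # this job, so all components get rebuilt.
      spec:
        containers:
          - image: some-bootstrap-image:tag   # placeholder
            command: ["scripts/build-components.sh"]
```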
A
Thank you. The next point is by Karolina, about documentation. Yep.
C
Can you see it? Yes? Okay. So basically, I would like to give you an overview of the changes I made in my last pull request. I restructured the existing README files across the whole repo, and I also added some new ones, and I kept to a structure in which I give an overview, a short overview, of the folders and subfolders, and also the purpose of the folder structure.
C
As you see here, in the main README document I already provided some information about Prow, and I also gave links that redirect to the main README and to the general documents that are now going to be placed in the docs and prow subfolders. In the prow README I gave a bit of information about what Prow is and what the basic rules are when working with Prow. So I used the information that was already there, but most of it I actually extracted to the docs folder.
C
If you are not familiar with the guidelines we have, I will use the time to say a few words about them. So we basically have the style and standards guides, and also one about formatting; when you're adding some diagrams, I would really recommend you have a look at that one as well. The same goes for the main README: I used this structure, and you can find it in the community repo under the templates. So please just try to follow that. That's all from my side.
E
All right, can you see it now? Yes? Okay, perfect. As Adam already explained, this was one of the prerequisites for starting the migration, so I'm just going to present the current status. We have been working on it for a while now; most of the jobs are already merged, and there's just one more job that we want to define, and then we'll be done with it. The jobs we created, you can find them under prow/jobs in the test-infra repository. The first one:
E
Okay, this one is to validate scripts. So whenever you make a change in any of the bash script files in the test-infra repo, this job will be triggered. It's using ShellCheck to validate the scripts, and it's a required job, so unless it is green, you can't merge the PR. And we also have the validate-config job.
E
This one was written by Jakub, and what it does is: if you have a change in plugins.yaml, config.yaml, or one of the job configuration files, they will be validated using the config checker provided by Prow, and if there is a problem with it, again, it's required, so you can't merge the PR. And that's pretty much it.
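A minimal sketch of what the script-validation step described above might look like. The function name and layout are assumptions; only the use of ShellCheck comes from the discussion. The linter binary is overridable via LINTER so the sketch can be exercised without ShellCheck installed.

```shell
#!/usr/bin/env bash
# Sketch of a validation step like the required presubmit described above.
# LINTER defaults to shellcheck, the tool the job actually uses.
validate_scripts() {
    local root="$1"
    local linter="${LINTER:-shellcheck}"
    local failed=0 script
    for script in "${root}"/*.sh; do
        [ -e "${script}" ] || continue   # directory has no scripts
        echo "Checking ${script}"
        "${linter}" "${script}" || failed=1
    done
    return "${failed}"
}
```

For example, `validate_scripts ./prow` would lint every top-level script in that folder; a non-zero exit marks the job red, so the PR cannot be merged.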
A
So I have a question about that: building the bootstrap image in the bootstrap container. Do you recall our talk?
B
The risk is that, can you imagine, the entire registry went out because it was captured by aliens. You can't actually just change it to Docker Hub and repeat the entire procedure, because you lack the first, initial image. So actually that's the only risk, which is kind of small, in my opinion, because, yeah, we are using the previous image to build the next one, kind of. So if everything goes away and we don't have this first one, we have to build it manually.
B
I think so. Well, the bad news is that we did not manage to finish it; we wanted to do it before this meeting, but we're almost there. What's missing, actually, is only building the Kyma-Installer image, which is, yeah, required, but I hope it will be merged soon, because we have already discussed many, many issues, and I would like to give you an update on this particular job. So this is the biggest job by far of all the ones that we have; it's kind of a pipeline here.
B
So it consists of many steps, and actually we should treat it kind of like work in progress, or, you know, a draft of the job, because of the next steps that we would like to do with this job. You can take a look, of course, at the pull request. It's a bash script; it's not that big, but there are many supporting bash scripts around it. We want to refactor it so that it will run faster, because the biggest pain with the existing Jenkins pipelines is the execution times, especially on pull requests.
B
So I would like to address this issue and have some time next week for it. Once this job, in the state as it is, is merged, it will execute all the necessary steps and it will run on pull requests. Fine, we'll have the first feedback, which is the time of execution. We can make some steps parallel; especially creating DNS entries is time-consuming. The time of execution changes: usually it oscillates around three minutes, but it can take five minutes, six minutes.
B
It can take one and a half minutes; obviously, it's changing. And we need to create a DNS entry for each run, unless we want to kind of manage a pool of DNS addresses, which we don't want to do. Yes, it's simpler to just provision this DNS address, at least for the time being, at the moment I mean. And so, yeah.
B
We want to shorten the time. To shorten the time, we want to run it in parallel, perhaps still using bash, and then we will have a number that we can compare, to see whether this parallel running is worth it or not. And once this proves to be worth it, we would like to perhaps rewrite this entire pipeline in some other tool, maybe Go, why not? Because then we will have some parallel execution there; yeah, so Go fits nicely into such a problem.
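The bash parallelization mentioned here is typically done with background jobs and wait. A small sketch, where the step names are made-up placeholders rather than the real pipeline steps:

```shell
#!/usr/bin/env bash
# Two independent, hypothetical steps that could run concurrently.
provision_cluster() { echo "cluster ready"; }
create_dns_entry()  { echo "dns ready"; }

run_parallel_steps() {
    # Start both steps in the background, remember their PIDs,
    # then wait on each; the function fails if either step fails.
    provision_cluster &
    local cluster_pid=$!
    create_dns_entry &
    local dns_pid=$!
    wait "${cluster_pid}" && wait "${dns_pid}"
}
```

The wall-clock time of the function is roughly the slower of the two steps rather than their sum, which is the saving being discussed.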
B
Yes, that is why we sometimes might not be able to directly reuse the existing jobs that are in FRA. For example, it's now obvious, we forgot about it, that we will probably still need a new image, because we need Helm, and Helm is right now not a part of the bootstrap image, and I would assume that we don't want to put it there just for this one case. Or do we? I don't know, yeah. There are things like that.
B
Yes, so this big pipeline is kind of special; it does many more things than the normal one. So yeah, I would say we should just continue on that, to proceed quickly, to get feedback, and then later on, once we have it working, we'll find a way to make it as fast as possible, because actually that's the goal of this pipeline: to support our users, and they don't want to wait. So I would say that's the primary goal.
B
Then we could decide about the technology, for example, how to rewrite it, because it's kind of obvious to me that it should be rewritten. In the state it is right now, it's maintainable, it's readable; there is nothing wrong with it, at least from my point of view. You can judge for yourself, of course, but in the long run I would say some better tool, especially for parallel execution, should be used for that. So, as for the status, I would kindly ask you to review it.
B
I will post a message on the working group chat once it's ready, because I'm rebasing right now, so some things might not be polished, so to speak, but I believe within an hour or two hours I will post a message so that it's ready for you. Again, I mainly updated the documentation, and yeah, I would like to just merge it and then continue improving it in subsequent tasks. So that's my plan for this pipeline.
A
Okay, a comment from my side: I think that at this point, yeah, as you said, it is the biggest pipeline we have at the moment, but we know that new pipelines are coming, and from my point of view there is a high probability that we will be duplicating functionality in those future pipelines, like, for example, the postsubmit on master or the release pipeline, stuff like that. And at this point we should at least think about whether it's not a good time to try to cut this pipeline into small chunks.
B
So your vision would be to not have a few jobs that are somehow orchestrated, but each pipeline is a single job, and it executes steps using a reusable library of steps, kind of, and then we can do what we want in terms of making things parallel or passing arguments from one step to another, something like this? Yes, that's correct.
A
Cluster setup, DNS, stuff like that; then, if I'm not mistaken, installing Tiller itself; then we are installing Kyma, and at the end we are testing that. So my idea would be to have smaller jobs, like a really small job which, for example, executes install-kyma, or provision-cluster, or setup-cluster. In that way, we can start building bigger pipelines just by picking those small jobs and orchestrating them into a bigger one.
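The proposal of composing small jobs could be sketched as a tiny orchestrator. Everything here (the function name and the step names) is a made-up illustration of the idea, not existing tooling:

```shell
#!/usr/bin/env bash
# Runs the given steps in order and aborts on the first failure,
# so bigger pipelines are just lists of small, reusable steps.
run_pipeline() {
    local step
    for step in "$@"; do
        echo "running: ${step}"
        # Steps are intentionally unquoted so a step may carry arguments.
        ${step} || { echo "failed: ${step}" >&2; return 1; }
    done
    echo "pipeline finished"
}
```

For example, `run_pipeline provision-cluster setup-dns install-kyma test-kyma`, where each step is a small script on the PATH.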
B
Okay, so that's, as I would say, an architectural decision, and okay, it's reasonable to me. It allows for reuse. However, it might require us to come up with a small DSL on our side, because for the job definition, either we write a script, okay, and this is our DSL, or we go a bit more abstract and create something that says: okay, now, for example, build the image, then set up networking, in our case, set up.
B
We need that to actually test this thing on the cluster, because that's our deliverable, our artifact; that's a separate step, and so on. For the time being, I think we can live with a bash script, yeah. So the only concern would be, but we can work it out, how to, for example, make things parallel and how to make them sequential, because that's what we need for this job, but I guess this is a technical detail that we can find a solution for. So, okay.