From YouTube: Kyma Prow Migration WG meeting 20181207
Meeting notes: https://docs.google.com/document/d/1ljEAoCBJXlxx_ATPyvKZ1KoyFOSIBzEAOkN-2H-HhUY/edit
A: Thank you. Today I will be your host, and we have a colleague taking the notes. The agenda for today's meeting is the following: at the beginning I will give you the current status and the next priorities, and later the integration results and the cleanup status will be presented.
Okay, let's have a look at what was done in the recent week in the test-infra repository. As you can see, again a new record: we have 41 pull requests merged by six people.
A: I will show our checks: for example, now we are able to click on the Details link, and it goes directly to the job status in Prow. When you click the build log, you will see what was done in the job. We were able to do that thanks to Spyglass, but to enable Spyglass we had to update the version of the Prow components. That was another piece of work done this week, and you can see what else there is on the board in the To Accept column.
A: We also have a post-submit job for the integration tests that runs on GKE, so now, when you merge to master, two integration jobs are run. One runs on Minikube on a VM; the second one creates a GKE cluster and runs the Kyma integration tests on it. We also had one interesting problem recently: in the logs of many of the jobs you can find an error that Docker cannot be accessed during the components build.
A: It was caused by the fact that every job requires a lot of CPU and memory, and when we run many jobs concurrently we do not have enough resources; at some point the Docker daemon was stopped. We have two solutions for that. One: we limit the maximum number of concurrent jobs in Prow, currently set to 10. When we have a bigger cluster, we will increase that value.
Also, when we updated our Prow components, we noticed the possibility to specify `run_if_changed` for the post-submit jobs, and thanks to that we limited the number of post-submit jobs running on a merge to the master branch of the Kyma repository. So generally, we limited the number of jobs that are executed after every merge to master.
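For context, `run_if_changed` in a Prow job definition is a regular expression matched against the files touched by a change; the job triggers only when at least one changed file matches. A minimal sketch of that semantics in Go (the component paths below are hypothetical, for illustration only):

```go
package main

import (
	"fmt"
	"regexp"
)

// shouldTrigger mimics Prow's run_if_changed behaviour: a job runs only
// if at least one changed file matches the job's configured regex.
func shouldTrigger(runIfChanged string, changedFiles []string) bool {
	re := regexp.MustCompile(runIfChanged)
	for _, f := range changedFiles {
		if re.MatchString(f) {
			return true
		}
	}
	return false
}

func main() {
	// Hypothetical changed file from a merge to master.
	changed := []string{"components/ui-api-layer/main.go"}
	fmt.Println(shouldTrigger("^components/ui-api-layer/", changed)) // true
	fmt.Println(shouldTrigger("^components/installer/", changed))    // false
}
```

This is why only the jobs owning the touched component paths fire after a merge, instead of every post-submit job.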
A: Okay, now let's have a look at the status of our epics. The most important one is about defining Prow pipelines for the Kyma components and, as you can see, most of the issues are now in the Closed state, which is very good. We have only a few in the Review column, only two in progress, and we still need to define jobs for five components. We also have one final task that should validate that all images built by Prow are correct.
A: So we will do a final check, and I hope we will be able to finish that epic at the beginning of next week. The next epic, which is very important for us, is hardening the Prow cluster. As you can see, we are somewhere in the middle of that epic: we have some closed issues, some to accept, some in progress, and only a few in the To Do column. In the In Progress column we have tasks about displaying metrics on a dashboard and about using a dedicated GCloud project for the Prow setup.
B: Yes, give me a second. Should I share my screen or just comment? Well, essentially, I won't be sharing, because there are a few issues related to that problem. It is mainly about cleaning up the resources that are allocated by our integration jobs, the ones that provision clusters and other resources, and it took us some time to really find a way to get to those resources.
B: I mean, to find those resources using Google's API, and now we think we have all the necessary information. The good thing is that for cleaning disks there is already a tool written and the pull request is open. It is already approved, so I will wait until the end of today before merging it, because perhaps some of you want to take a look. So please take a look.
B: If you have any comments, you are welcome to add them. So this tool will allow us to clean disks. For the networking resources we already have a tool, but it is not yet rewritten nicely in Go, and we will focus on that. What is left are the clusters, which are very easy to find and delete, plus the IP addresses and DNS records. We already know how to find the IP addresses, and we can check whether they are used by anything.
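The "used by anything" check presumably relies on the fact that a reserved address in the Compute API reports the resources referencing it; an address with no users is a leftover from an aborted job. A minimal sketch of that filter over locally defined data (the real tool would list addresses through the `google.golang.org/api/compute` client; all names below are made up):

```go
package main

import "fmt"

// address mirrors the fields of interest on a GCE compute Address:
// a reserved address lists the resources referencing it in Users.
type address struct {
	Name  string
	Users []string // URLs of referencing resources; empty when orphaned
}

// orphaned returns the addresses that nothing references, i.e. the
// candidates for cleanup after an aborted integration job.
func orphaned(addrs []address) []address {
	var out []address
	for _, a := range addrs {
		if len(a.Users) == 0 {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	addrs := []address{
		{Name: "gke-ingress-1", Users: []string{".../forwardingRules/fr-1"}},
		{Name: "leftover-ip"}, // no users: left behind by an aborted job
	}
	for _, a := range orphaned(addrs) {
		fmt.Println(a.Name) // prints only "leftover-ip"
	}
}
```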
B: Okay, so these two open tasks should be finished, I mean cleaning the disks and the networking resources, which are perhaps the most costly for us. The clusters are not the first priority: they can easily be found and deleted manually before next week, for example. Yes, we can do it; there are labels already, so they are very easy to find, and there are not as many clusters as there are, for example, disks, IP addresses and DNS records. Yes, we must take a look, but essentially the rest has to wait, I guess, right? Okay.
B: We want to proceed like this: first create tools in Go that can be launched manually, because then the review is easier. We can ask colleagues for review, well, it is a bad word, but colleagues who do not necessarily know Prow very well, and get their acceptance or comments. So we prepare the tools, and in subsequent pull requests we can either provide a Prow job per tool, or there will be one central Prow job that just launches these tools in sequence or in parallel, it doesn't matter, and the tools will do the job.
B: That is how I would like to proceed with this task, because obviously I wouldn't like to make one big pull request with all of this. First, it would be very late: we would have to wait something like two weeks, which doesn't make sense. And it would be a very big pull request, so hard to review. If you agree with that approach, then we can continue like this. Okay, I don't need feedback right now, we can discuss it offline. Okay.
A: I have one more question. Currently we are going to perform cleanup only for orphaned resources, so we are talking about the resources which were created by a job that was later aborted. Normally we remove all the resources in a regular job run, but...
B: There is one category of resources that is not cleaned up yet, I think. Well, I am not sure, but I am pretty sure that this category of resources is load balancers, because they are not deleted, I would say, synchronously along with the job. The job doesn't take care of them; from my experiments, they are not cleaned up when, for example, you uninstall Kyma from the cluster.
B: Although Kyma somehow allocates these resources, they are not cleaned up with it, so for example this object stays after removing Kyma from the cluster, and even removing the cluster also leaves it there. So for this we must have a periodic job. We could, of course, attach this logic to the job itself so that they are cleaned synchronously, but right now that is not the case, unlike for all the other resources.
B: It is something you could refer to as a load balancer, although technically there is no such thing in terms of a Google API object. A load balancer consists of at least three objects, and these are a health check, a forwarding rule and, I can't recall the exact name of the third one, but we can refer to them all together as a load balancer. This is something that consumes our quota, and if we exceed the quota, we will be blocked. So it is essential to remove those things, but as I mentioned, there is no single entity for them in Google's API.
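Because the "load balancer" is really several API objects that reference each other, a cleanup tool has to delete them in dependency order: the forwarding rule first (it points at the target pool), then the target pool, then the health check. A sketch of that ordering with stub delete functions (the real deletions would go through the Compute API; everything below is illustrative, not the actual tool):

```go
package main

import "fmt"

// deleteFn stands in for one Compute API delete call (hypothetical;
// the real tool would use the google.golang.org/api/compute client).
type deleteFn func(name string) error

// deleteLoadBalancer tears down the objects that make up a GKE-created
// network load balancer. Order matters: the forwarding rule references
// the target pool, which references the health check, so we delete from
// the outside in and stop on the first error.
func deleteLoadBalancer(name string, delForwardingRule, delTargetPool, delHealthCheck deleteFn) error {
	steps := []struct {
		kind string
		del  deleteFn
	}{
		{"forwarding rule", delForwardingRule},
		{"target pool", delTargetPool},
		{"health check", delHealthCheck},
	}
	for _, s := range steps {
		if err := s.del(name); err != nil {
			return fmt.Errorf("deleting %s %q: %v", s.kind, name, err)
		}
	}
	return nil
}

func main() {
	var order []string
	record := func(kind string) deleteFn {
		return func(name string) error {
			order = append(order, kind)
			return nil
		}
	}
	_ = deleteLoadBalancer("a1b2c3",
		record("forwarding-rule"), record("target-pool"), record("health-check"))
	fmt.Println(order) // [forwarding-rule target-pool target... prints the three kinds in order
}
```

Stopping on the first error matters here: skipping ahead would try to delete a target pool that a forwarding rule still references, which the API rejects.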
B: Well, I would say not right now. Why? Because the Kyma integration job right now is written in bash, and you are talking about extending it with a substantial amount of logic. The script for cleaning this load balancer, which I already have, is at least as big as the Kyma integration job itself. So you are talking here about really extending this bash source code, which is not really nice. I would rather prefer to keep it in this periodic job.
B: We should finish this periodic job as soon as we can; we can even run it every two hours, no problem, so we will have this cleanup in place shortly. Later on we can gradually replace our existing bash script with, for example, Go code, or some better solution. Go code is one solution; any proper programming language would be preferred here, because although this job is readable, it is not really maintainable. I mean, extending it in bash is hard, so I would rather prefer separate tasks to rewrite it.