From YouTube: Kyma Prow Migration WG meeting 20181221
Description
Meeting notes: https://docs.google.com/document/d/1ljEAoCBJXlxx_ATPyvKZ1KoyFOSIBzEAOkN-2H-HhUY/edit
A: I'm still recording. Welcome, everyone, to the Prow migration working group. Today I will be your host, and a colleague is taking notes. At the beginning, as usual, I will present the current status and the epics. Next there are two topics: the status of the migration to the new GCP project, and how to work with Prow when Jenkins is gone. So, at the beginning, let's check what we have achieved in the recent week.
A: 27 pull requests were merged; let me go through the most important of them. For example, we made a lot of improvements in the periodic jobs that are responsible for cleaning resources: we improved the disk cleaner logs, we created a job that removes orphaned clusters, and probably also a job for removing orphaned virtual machine instances.
A: What is next? Additionally, we disabled the automatic triggering of jobs for external contributors in Prow. So now, when an external contributor wants to create a pull request, he will be able to do that, but the jobs will not be triggered automatically; he needs to ask someone from the organization to trigger the tests by adding a comment there.
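The behavior just described maps to Prow's trigger plugin. A minimal sketch of what such an entry in plugins.yaml could look like; the org name and `trusted_org` value are assumptions for illustration, not our actual configuration:

```yaml
# Sketch of a trigger entry in Prow's plugins.yaml (hypothetical org name).
triggers:
  - repos:
      - kyma-project      # assumed org; presubmits run automatically
                          # only for members of the trusted org
    trusted_org: kyma-project
# Pull requests from anyone else stay untested until an org member
# comments /ok-to-test (or /test ...) on the pull request.
```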
A: So let me quickly show how it looks in the docs directory. We updated the tools documentation and the release process description. The most important thing here is that most of the work will be done on the pull request, and we also have one postsubmit job that will be responsible for creating the release and the Git tag. We also have a detailed action plan for the release: what needs to be done.
A: Also, the calculation of the release image tag is much simpler than previously. It is also important to note that we provide a migration guide for release jobs; there we focus more on how to define a job for a specific component. One more thing, because I think we should all be aware of it: this release process is not yet final, so together with Magda we created an issue to make the process more robust.
A: In that issue we describe what potential problems we see and how they can be solved. So please also familiarize yourself with the issue and all the comments you can find there, because we need your help; maybe you can find better solutions for our problems. Let's get back to the boards. What else? Oh, a small but very nice feature: team branding for Prow. Whenever you go to the status of a build, we now see our own logo instead of the default one.
A: We also started disabling some Jenkins jobs, for example for the community repository. That's probably all, so now let's go to our board. Here we have, as of today, the most important epic, about providing pipelines for components, and as you can see, all issues are closed; only two are left to accept, and probably today we will move them to closed. Among the exceptions we have, for example, one about validating the images built by Prow.
A: We checked whether all images built by Prow are correct; we validated that, and everything is fine. Also, please have a look at the status of the epic about adding the Prow cluster. Recently we made huge progress, and we also cleaned up that epic a little bit, so we have only two open issues, one in progress and one in review, and the rest are closed or accepted.
A: What remains? We need to work on the proper plan for releasing Kyma, so we need to provide pipelines for every component as well as jobs responsible for release-specific tasks, and we plan next week to start disabling the Jenkins pipelines and fully enabling Prow for the projects. We hope to finish that by the end of next week, and that's all from my side. So now let's go to the next point on our agenda, the status of the migration. Are you ready?
B: Perfect, okay. About the status of the migration of Prow to the new GCP project: as some of you already know, Prow is fully migrated to the new project. Right now access is limited; I think only about four people have access to the new project. I also uploaded every secret, and for each job a dedicated account, and I think we finished that process. So the production cluster is, let's say, in a secure environment.
B: So let's move to the next topic, about Jenkins and Prow. As we are starting to move our repositories to Prow and we are going to disable Jenkins, I decided it would be good to present how to work with Prow. Until my commit yesterday we had two repositories migrated to Prow, and mostly people from our working group worked on them, because one of them was test-infra, the repository where we started with Prow and tested everything.
B: A few weeks ago I migrated the website repository, because it was a little complicated to work with it on Jenkins, so I showed my team how it works, and so on. Right now I have also moved the community repository, and that is a repository which lots of people use, so it's really good to show how to work with it. I will present everything on, let's say, the test-infra repository, because on that repository we have around 14 jobs or something like that, so it looks better.
B: You can see we have lots of jobs here. Most of the jobs are skipped because I didn't change the files that they are triggered by; we define a file pattern for each job that skips it when no matching file changed. As you can see, I need to wait for two jobs, because I changed two files: a markdown file and one of the tests.
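The skipping behavior described here corresponds to Prow's `run_if_changed` field on presubmit jobs. A minimal sketch, assuming hypothetical job, repository, and pattern names:

```yaml
# Hypothetical presubmit entry (Prow config.yaml); the job name, repo,
# pattern, and image are assumptions for illustration.
presubmits:
  kyma-project/test-infra:
    - name: pre-master-governance
      # Prow runs this job only when the pull request touches a matching
      # file; otherwise the job is reported as skipped.
      run_if_changed: '\.md$'
      spec:
        containers:
          - image: alpine:3.8
            command: ["sh", "-c", "echo run the governance checks here"]
```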
B: OK, so as you can see, the test-infra governance job has also started. I can go to the details from this view here, and now I can see that my job is in progress. But how can I see the status of that job? On the status page we see the jobs triggered for my pull request, and we can monitor them here.
B: We can use Spyglass, like here, and as you can see, this job finished with success, so there are no issues with it. There is also another job that is still in progress, but it should fail, because I made a change that will not pass. So we need to wait a moment for the result of that job. OK, as you can see, the status of the governance job is also still available here, so you can go to the details from that page.
B: OK, I will write something here, and now we can see the proposed change. One moment. Oh, as you can see, the job has already finished, and Prow sent a message to the owner of this pull request that some of the tests failed. Prow also tells you what you can do with that status, because we control Prow through comments on the pull request. There is no option like in Jenkins, where you trigger the jobs with a play button; simply, all communication with Prow is done by comments.
B: So, as you can see, if this is, let's say, some job that fails randomly, we can retest that job. Let's assume it is the kind of job, say an integration test, that sometimes fails. When I write /retest here, it will trigger the job that failed. Wait a moment, it needs to be triggered.
B: OK, it has already been triggered, but, as you can see, only one job has been triggered, because I wrote /retest, which restarts only the jobs that failed. If I want to start all tests that are relevant for this pull request, I should write /test all; then all tests for this pull request will be triggered, but only the necessary jobs. Sometimes we may also want to trigger exactly one job to test something.
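As a quick reference, these are the comment commands used in this walkthrough (the specific job name in the third line is a hypothetical example; Prow lists the real, per-repository commands on its command-help page):

```
/retest            re-run only the presubmit jobs that failed
/test all          run all presubmits relevant to the changed files
/test <job-name>   run exactly one job, e.g. /test pre-master-governance
/ok-to-test        an org member marks an external pull request as trusted
```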
B: Okay, as you can see, the job validating Prow is in progress, the test-infra governance job is in progress, and the second one as well, because I wrote /test all; and the job for the bootstrap image is also in progress, because I explicitly asked for it. Okay, so what should I do now to fix my pull request? I changed the file so that it should work; you can see I changed the name of the branch.
B: With this, everything is triggered. But what happens with people who are not from our organization? Here I will create a pull request in which I change a markdown file, and, as Adam mentioned, since we disabled the ok-to-test plugin, people who are not members of our organization will see this message: Prow welcomes that person and says that they need to wait for a person from our organization who will trigger the tests.
B: For example, if I, as someone from outside, write /test all, the tests will not be triggered. That is important for security reasons: we have secrets in Prow and so on, so we want to test only trusted code. Right now, as you can see, the jobs are not triggered, but now I, as a member of the organization, go back to that pull request and write /test all.
B: Unfortunately, when we disabled the ok-to-test plugin, there was no longer a label like needs-ok-to-test, so we don't have an easy way to see whether a pull request comes from people who are not members of our organization; we need to watch for that ourselves. Also, as you can see, such a person cannot trigger the jobs themselves, and, answering the question of how the jobs get run: importantly, a member of the organization has to trigger them each and every time that person changes something.
B: Okay, and the last thing, about releasing our components. Like in Jenkins, when we merge a pull request to the master branch, we build the image into the develop directory on GCR. There is a way of looking for those jobs: we need to filter by postsubmit jobs. Then you can choose the repository; let's say I want to look for a job that was triggered for the Kyma repository, for some component, for example a controller.
B: Oh, sorry, there is no job for that controller, so let me look for something else. I'm looking for this job; I just put the name of that job here and then we can easily find it. You can go to the logs of that job and find the tag of the image, and then you can go to the resources of Kyma and bump the image.
B: For a person who is outside the organization, that again has to be triggered by someone from the organization. Also, I don't know whether /meow is also blocked for them; as you know, this is a plugin that will put an image of a cat, like here. This works on pull requests as well, and the list of commands is listed on the Prow command help page, where you can also see the test commands.
C: Yes, let me briefly give you the status of the cleanup jobs. I did not put it in the agenda, I'm sorry, because I had the sprint planning just today in the morning, so I didn't manage to. Let me just very briefly explain how it works and what's not done yet. I will share my screen; which one is it... I think it's this one.
C: These tools run on different schedules. Now it's up to the production cluster administrator to fine-tune the schedules, and for that we need a bit more time, because of course we don't know yet how frequently we should clean up. Running the cleaners too frequently can also cause problems, because it consumes resources and can perhaps limit the actual jobs that are running, like the jobs verifying pull requests. So we must not run them too frequently.
C: Right now the orphaned load balancers cleaner runs once a day, and the other cleaners run somewhere between every 2 and 4 hours. They are spread in time so that they do not run at the same moment.
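Periodic Prow jobs like these are scheduled with cron expressions. A sketch of what the staggered schedules above could look like; the job names, image, and exact crons are assumptions:

```yaml
# Hypothetical periodics (Prow config.yaml) illustrating schedules that
# are spread in time so the cleaners never start at the same moment.
periodics:
  - name: orphaned-loadbalancer-cleaner
    cron: "0 1 * * *"        # once a day
    spec:
      containers:
        - image: eu.gcr.io/example-project/cleaner:latest   # assumed image
          command: ["/cleaner", "--resource=loadbalancer"]
  - name: orphaned-vm-cleaner
    cron: "30 */4 * * *"     # every 4 hours, offset by 30 minutes
    spec:
      containers:
        - image: eu.gcr.io/example-project/cleaner:latest
          command: ["/cleaner", "--resource=vm-instance"]
```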
C: You can see here an example run of, for example, the virtual machine cleaner from today. This one actually cleaned something; the log is here: it cleaned 19 virtual machine instances, because it was the first run after the tool was merged today.
C
Probably,
if
you
look
at
the
the
logs,
you
will
see
nothing
like
found
no
disk
to
delete.
This
is
normal
because
they
run
frequently
usually
did
its
yeah.
The
resources
are
not
there.
They
are
cleaned
by
the
previous
job
and,
what's
not
here
yet
are
two
resources
that
the
concert
administrator
must
remember
about
IP
addresses
and
DNS
records.
We
still
create
them
by
our
integration
job.
So
the
the
reason
there
are
not
cleaners
for
photos
is
as
follows.
C
We
are
right
now
evaluating
the
other
approach
to
install
and
test
kheema
based
on
X
Y
P
IO,
a
service
which
kind
of
gives
us
a
way
to
set
up
Keima
without
actually
making
it
then
DNS
entry.
That
means
we
will
not
have
to
allocate
IP
address
and
DNS
record,
probably
very
soon.
In
like
two
weeks
there
is
already
a
task
schedule
the
task
prepared
for
that.
C
We
will
try
to
provide
a
proud
job
for
integration
testing
using
that
mechanism,
and
if
this
works-
and
we
we
hope
we
are
pretty
sure
it
will
work,
then
we
won't
allocate
IP
addresses
and
DNS
records
and
the
more
so
there
won't
be.
The
need
for
having
a
cleaner
will
also
disappear.
So
that's
why
there
is
no
cleaner
afford
it
for
the
time
being,
for
these
two
weeks,
I
think
we
can
live
with
what
we
have
now
and
manual.
Cleanup
of
these
two
resources,
perhaps
is
necessary,
but
you
don't.
C
There
is
no
need
to
test
it
every
day
or
verify
it
every
day.
Perhaps
two
times
a
week
is
enough
just
to
ensure
there
are
no
like
100
DNS
entries
or
something
like
that.
Even
if
so
very
easy
to
delete
it
from
the
GCP
console,
so
that's
the
status
for
the
jobs
and
one
more
thing
about
how
we
can
approach
a
situation
like
we
had
with
the
screeners
when
we
have
a
developer.
That
is
no
longer
an
admin.
So,
for
example,
I
was
preparing
this
VMs
cleaner
virtual
machines
instance
cleaner.
C
But
since
we
switched
to
a
production
cluster,
a
real
production
cluster
this
week,
I
wasn't
an
admin
anymore,
so
I
wasn't
really
able
to
test
my
two
in
a
production
until
it's
merged.
So
this,
as
you
can
imagine,
provide
some
risk
in
how
to
approach
it,
and
we
chose
a
solution
that
maybe
I
will
show
you
very
quickly.
C: Essentially it's a two-stage approach. In the first pull request we just provide a tool, or a job, or whatever, but it runs in dry-run mode. So we do a pull request, we can run the tool locally against some cluster or some mock data, and we ensure that the dry-run mode of such a tool is handled correctly. Then we can merge it, and in the cluster we know it runs in dry-run mode.
C
So
if
we
miss
configured
something
or
we
forget
about
something,
it
wanted
to
any
harm,
at
least
the
worst.
It
will
not
work
yes,
but
it
will
not
cause
any
troubles
and
if
it
runs
fine
after
few
runs,
we
we
just
switch
it
to
production
configuration
I
can
show
you
how
it
was
done
here.
The
switch
the
production
cone
Eurasian
essentially
recur.
What
has
changed
is
the
chrome
expression
so
that
in
dry
run
not
it
runs
more
frequently
every
like
ten
minutes
or
something
like
that
to
get
feedback
faster
and,
of
course,
drying.
C: In the production config we switched the schedule to some real value, like every four hours, and dry-run is turned to false, so that it will do the real job. So that is the approach, and it worked. We can use it in the future when we have a situation where the developer is no longer an admin and he or she doesn't have access to the cluster anymore.
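A sketch of how that two-stage rollout could look in a periodic job definition; the flag name, image, and crons are assumptions, not the actual tool's interface:

```yaml
# Stage 1 (first pull request): frequent schedule, dry run only, so a
# misconfiguration cannot delete anything.
periodics:
  - name: orphaned-vm-cleaner
    cron: "*/10 * * * *"     # every 10 minutes for fast feedback
    spec:
      containers:
        - image: eu.gcr.io/example-project/cleaner:latest  # assumed image
          command: ["/cleaner", "--dry-run=true"]          # log only
# Stage 2 (follow-up pull request, after a few clean runs):
#   cron: "0 */4 * * *"                                    # real schedule
#   command: ["/cleaner", "--dry-run=false"]               # real deletes
```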
B: Can you see my screen? Yes? One moment... okay. So yesterday we had a discussion with the guys from Prow, with Adam, because there is a problem with setting default resource requests and limits for jobs. In fact, it is not possible in our configuration, because we were thinking about defining a LimitRange for the namespace where the jobs will be triggered, but when we define the LimitRange, the default values are also propagated to clonerefs, initupload, and the other utility containers inside the pod.
B: So in fact the pod will use five times more resources than we want. We discussed it with the people from Prow, and there was a surprise: they have the same issue and they didn't know about it. After the discussion we decided to create an issue, and we are free to contribute; maybe they will fix it. This is really important for us, because it has a big impact on autoscaling and cluster performance.
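To illustrate the problem, here is a hypothetical LimitRange of the kind discussed; the numbers are made up. Kubernetes applies container defaults to every container in a pod, so Prow's utility containers (clonerefs, initupload, entrypoint, sidecar) each receive the same default as the test container, multiplying the pod's effective footprint:

```yaml
# Hypothetical LimitRange for the namespace where Prow jobs run.
apiVersion: v1
kind: LimitRange
metadata:
  name: prowjob-defaults
  namespace: default        # assumed job namespace
spec:
  limits:
    - type: Container       # applies to EVERY container in the pod,
                            # including Prow's utility containers
      defaultRequest:
        cpu: "1"
        memory: 2Gi
      default:
        cpu: "2"
        memory: 4Gi
```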