From YouTube: Kyma Prow Migration WG meeting 20181130
Description
Meeting notes: https://docs.google.com/document/d/1ljEAoCBJXlxx_ATPyvKZ1KoyFOSIBzEAOkN-2H-HhUY/edit
A: Okay, perfect. As you see, we don't have many agenda items today, so it's going to be just me going over the status and telling you about the current priorities we have, and then if you have any questions or any items to discuss, please let us know. Just looking over the last week, we had a pretty productive week actually: we had twenty pull requests merged and there are eleven more, all waiting to be merged, and most of them are about the migration of the components.
A: Just like we told you in last week's meeting. Let me show you last week's goals: there are four goals that we determined last week. The first one was helping in the migration. For that we had two different meetings, one in Gliwice and one in Munich, and in those meetings we explained the migration guides, how to migrate one component from our internal CI to Prow, for the people from Kyma, and after that they started with the migration.
A: So directly from this it can be checked by anybody right now, with this command for the migrations. Actually we see that there are a lot of components that are in progress of migration right now, which is pretty good. And there are three other goals that we had: enabling access to the logs, which is still in progress, enabling metrics for the Prow cluster, and also the new GKE project for Prow. They are all still in progress.
A: We have several items in the "to accept" column; these are the ones that have been merged and are waiting for the leads' review. They are mostly about migrating components, but there are a couple of more important things, for example ensuring clean-up of the persistent volumes. This was a big problem, because every time we created a test cluster on GKE and then deleted it, the disks were still there, so this led to leaked resources. But now, thanks to Tomek's PR, there is a way to delete them.
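The clean-up A describes can be approximated by looking for disks that no instance references. A minimal sketch, assuming the data comes from `gcloud compute disks list` (the function names and the project argument are made up for illustration; an unattached disk has an empty `users` field):

```python
import json
import subprocess


def find_orphans(disks):
    """Return names of disk records that no instance is using.

    A disk without a non-empty "users" list is attached to nothing,
    which after a deleted test cluster usually means a leaked resource.
    """
    return [d["name"] for d in disks if not d.get("users")]


def list_orphaned_disks(project):
    """Sketch: fetch all disks in a project via gcloud and filter them."""
    out = subprocess.check_output([
        "gcloud", "compute", "disks", "list",
        "--project", project, "--format", "json",
    ])
    return find_orphans(json.loads(out))
```

The filtering is kept separate from the `gcloud` call so it can be tested without touching a real project.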
A: So every time we do the deprovisioning of the cluster, we are now deleting the disks as well. Other than that, there is one more important thing: the ability to update secrets. So far we had the script to create secrets on the Prow cluster, but we couldn't use the same script to update them as well, and now that is also possible. Other than that, it's mostly migrations.
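A common way to make one script both create and update a secret is the `kubectl create --dry-run -o yaml | kubectl apply -f -` pattern. A hedged sketch of building such a command (the function name and arguments are made up; the actual Kyma script may work differently):

```python
import shlex


def upsert_secret_command(name, literals):
    """Build a shell command that creates a secret if missing or
    updates it if it already exists.

    `kubectl create --dry-run -o yaml` renders the secret manifest
    without touching the cluster; piping it into `kubectl apply`
    makes the operation idempotent.
    """
    create = ["kubectl", "create", "secret", "generic", name,
              "--dry-run", "-o", "yaml"]
    for key, value in sorted(literals.items()):
        create.append("--from-literal=%s=%s" % (key, value))
    return " ".join(shlex.quote(p) for p in create) + " | kubectl apply -f -"
```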
A: There are also a couple of items from last week. There is this one: just to improve the security level, we have increased the token size from 20 bytes to 32 bytes. This was one of the topics that resulted from the threat modeling sessions with the security team. And I think that's pretty much it.
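For context, the token size here is the number of random bytes: going from 20 bytes (160 bits) to 32 bytes (256 bits) of entropy. A minimal sketch of generating such a token in Python (the function name is made up; the actual tooling may use something like `openssl rand -hex 32` instead):

```python
import secrets


def new_token(nbytes=32):
    """Generate a hex-encoded random token.

    32 random bytes give 256 bits of entropy, up from the previous
    20 bytes (160 bits). Hex encoding doubles the character count.
    """
    return secrets.token_hex(nbytes)
```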
B: Perhaps not a question, but just a comment about our resources. We've mentioned this pull request; it's a first step towards, I would say, stopping the resource leaking that we currently have. Of course the problem is when the pod is terminated before, for example, our cleanup is executed; we are working on that, and a colleague is working on similar cleaning for network resources, which are not that easy to actually find.
B: It's not ready yet, so we still have to delete them manually; I would say daily we delete as many objects as we can. For the time being we are able to keep this within some limit, but ultimately we need a proper solution, of course.
A
Yeah
I
think
we
need
to
create
a
periodic
table
or
something
to
detect
undelete
toast
with
resources
some
points
and
by
the
way
does
it
an
is
the
normal
issue
in
Google
Cloud.
That's
is
this.
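The periodic job A suggests could look roughly like this in Prow's job configuration. This is only a sketch: the job name, interval, image, and command are illustrative, not the real Kyma setup (the `--filter="-users:*"` expression is one plausible way to ask `gcloud` for unattached disks):

```yaml
periodics:
- name: orphaned-resources-detector   # hypothetical job name
  interval: 24h
  spec:
    containers:
    - image: google/cloud-sdk:slim    # any image with gcloud available
      command:
      - /bin/bash
      - -c
      # sketch: report disks that no instance is using
      - gcloud compute disks list --filter="-users:*" --format="value(name)"
```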
B: Yes, the clusters clean up after themselves, so to speak, but when there is dynamic provisioning, when some deployment in the Kubernetes cluster requires something like a load balancer or, for example, a PVC, then the underlying Google Cloud infrastructure provisions this on demand, dynamically, and these objects and resources are not automatically removed. Yeah, it's a known fact.
B
The
problem
here
is
that
there
is
no
very
easy
and
straightforward
way
to
find
those
resources.
That's
the
the
biggest
problem,
because
you're
working
on
that
and
we've
already
found
some
discussions,
and
even
there
is
a
script
on
github
that
addresses
very
similar
issue,
mainly
to
find
our
front
load
balancers.
So
others
have
also
encountered
this
problem
and
yeah.
So
I
would
say
it's
a
non-issue
with
Google
and,
for
example,
some
mechanism
in
Google
cloud
like
labeling,
is
also
not
completely
implemented,
so
we
could
probably
label
those
objects.
B: Somehow, if we could, while the cluster is still running, for example. But not all the resources allow labeling; I would say most of them do not, so that makes the problem even bigger. We have to use different techniques, different Google Cloud API calls; there is no standardized and straightforward way to have a complete solution. So right now we are improvising a bit and finding something that works.
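For the resource types that do support labels, the ownership marking B describes could be done with commands like `gcloud compute disks add-labels`. A hedged sketch that only builds the command (the disk name, zone, and label key are made up; as B notes, many other resource types have no such labeling API, which is exactly the limitation):

```python
def label_disk_command(disk, zone, labels):
    """Build a gcloud command that attaches labels to a persistent disk,
    so a later cleanup sweep can find resources owned by test clusters.

    Sketch only: `gcloud compute disks add-labels` exists, but most
    dynamically provisioned resources cannot be labeled this way.
    """
    cmd = ["gcloud", "compute", "disks", "add-labels", disk, "--zone", zone]
    cmd.append("--labels=" + ",".join("%s=%s" % kv for kv in sorted(labels.items())))
    return cmd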
A: All right. For the upcoming weeks I think there will be a lot of components again being migrated, so our first goal will be to help people migrate components, and we will also continue solving the issues about our current Prow setup, or, better to say, improving its stability. That's all we have.
A: We have a timeline for it: right after being done with the migration of the components, the next big topic will be releasing. So until the end of this year, hopefully, we can be done with it, so that in mid-January we can make the next release of Kyma using Prow. So that's the happy path.
B
Yes,
that
would
be
great
if
we
can
do
it
within
this
time
line,
then
it
would
be
great
because
we
have
already
we've
already
heard
some
messages
about
our
existing
infrastructure
is
go.
Infrastructure
is
going
to
be
turned
off
at
some
unknown
time
next
year,
so
we
have
to
do
it
as
fast
as
we
can
mm-hmm.
C: The migration will happen to two Prow clusters. I will be working on stabilizing the new cluster on the new project; then the jobs will be working there, and when it is stabilized the migration will be redirected to the new cluster. Right now I am finishing the task of monitoring, I'm still working on it, and then I have a plan to create the new cluster in the next week, let's say Tuesday or Wednesday, but I hope we will be able to do it in this time.
B: One thing that comes to my mind about this, which should be addressed by the latest pull request that I merged, is potential name clashes, because if you have two of the same jobs running in the old cluster and in the new one, which is the intended transition, and some resources or names are calculated identically in the jobs, then there could be a clash in Google Cloud, unfortunately. So take this into consideration; I will also think about how to avoid such a situation. That's all.
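One hypothetical mitigation for the clash B describes is to fold a cluster identifier into every computed resource name, so the same job on two clusters never produces the same name. A sketch (the function, the cluster IDs, and the 63-character cap, which matches the usual GCP name limit, are all assumptions, not the merged PR's actual approach):

```python
import hashlib


def scoped_name(base, cluster_id, max_len=63):
    """Derive a resource name that is unique per Prow cluster.

    Appends a short, deterministic hash of the cluster identifier,
    so identical jobs on the old and new clusters compute different
    Google Cloud resource names.
    """
    suffix = hashlib.sha1(cluster_id.encode()).hexdigest()[:6]
    return base[: max_len - 7] + "-" + suffix
```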
C: There shouldn't be that problem, because the new cluster will work in a different project, like I said, and it will also have a different domain; the root of the domain will be different. I was thinking about it, and also on the new cluster all the jobs we have will have skip_report set to true, to not duplicate the notifications. As for the bucket and logging: I mean this bucket will be in the new project, and it will also be a new bucket.
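The `skip_report` flag C mentions is a standard Prow job option that runs the job but suppresses reporting back to GitHub. On the new cluster a job could carry roughly this configuration (the repository is real, the job name is illustrative):

```yaml
presubmits:
  kyma-project/kyma:
  - name: kyma-components-build   # hypothetical job name
    skip_report: true             # run the job, but do not report its status to GitHub
```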