A
So last time we discussed a list of items; most of them are either done, or I have created backlog items in our SAP backlog for handling them.

A
Yeah, the good news is that our cf-deployment release process works, so I think we have now cut three releases, including an incompatible one. This is all really working very nicely, and it's also pretty much fully automated: you just have to inspect the changes and decide what to release. The only thing that is not so nice is that some of the jobs here are still a bit flaky, and we see BOSH timeouts and failing CATS tests.
A
Then this should become better, okay. But okay, so for the uptimer there is a little bit of a mess at the moment: we have one pull request from July which migrates to Go modules, and maybe I should have already merged it, because now it conflicts with this Korifi pull request, which makes changes to the vendored dependencies. So yeah, that doesn't look so nice. I think the...
B
Only
difference
between
these
two
is
the
first
one
is
updating
from
depth
to
grow,
mod
and
keeping
the
vendor
directory,
and
the
new
one
is
doing
the
same
thing,
but
deleting
the
vendor
directory.
So
we
really
just
need
to
decide.
Do
we
want
to
keep
the
vendor
directory
or
not.
A
So, to my understanding, if we have a go.mod module definition, we don't need the vendored dependencies anymore.
C
Maybe, yeah. I mean, then you just run a binary, right? I don't think there's... If you already have everything vendored, then I don't think go run versus running the binary is going to do much in terms of the actual startup, because Go compilation is really fast. But passing a binary around is a lot lighter than passing an entire repository around. So...
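For reference, a minimal sketch of the dep-to-Go-modules migration being discussed; the module path is assumed from context, and whether to keep the vendor directory is exactly the open question above.

    # Create go.mod (older Go toolchains could seed the requirements from the existing dep files)
    go mod init github.com/cloudfoundry/uptimer   # module path assumed from context
    go mod tidy                                    # resolve and prune dependencies
    # Option A: keep a vendor/ directory alongside go.mod
    go mod vendor
    # Option B: drop vendoring and rely on the module cache instead
    rm -rf vendor/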
A
Yeah, okay, that should be feasible. Yeah. Then the next thing was the cf-test-helpers; here we still want to cut...
C
No, it used to be tested essentially by being immediately pulled into the CF smoke tests and CF acceptance tests, and then the CF acceptance tests CI and the CF smoke tests CI would run through.
C
We
stopped
doing
that
because,
anytime,
we
introduced
a
like
a
bug
into
CF
test
helpers
or
like
had
you
know,
changes
that
built
off
each
other.
Everything
would
break
without
having
a
chance
to
make
the
changes
and
see
if
acceptance,
tests
or
CF
test
helpers,
since
we
made
that
switch
though
I
I've
just
been
releasing
it
with.
You
know,
git
tag,
git
push
tags
and
then
cutting
manual
releases.
Github
actions
is
pretty
comprehensive
for
testing
it,
but
it
probably
would
be
better
to
have
a
concourse
pipeline.
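A minimal sketch of that manual release flow, with a hypothetical version number; the GitHub release step could equally be done in the web UI instead of the gh CLI.

    # Tag the commit that should become the release and push the tag
    git tag v1.2.3                        # hypothetical version number
    git push origin --tags
    # Then cut the GitHub release for that tag
    gh release create v1.2.3 --generate-notes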
C
Yeah, I think in all the individual packages there are... there are, oh...
C
I think there was a question in my mind of whether we should be cutting real semver releases for this thing, or whether we should just say this thing is always subject to change depending on our needs, it's not actually semver, and make it just minor releases every single time. But given we've already started down the semver road, it's probably fine to keep going until we see a reason to switch.
B
Whether it's breaking or not, if you're doing it automatically, then you kind of have to say: oh yes, I'm merging this change, let me quickly bump to the next major before the automatic job runs. So...
B
We'll still get the PRs for free; it's just only when we decide to release. Presumably that's true, so I mean we can always start manual, and if we feel like we're forgetting and it's too painful, we can switch to automatic.
A
Yeah, okay, good. Then let's start with a manual process; I mean, it's better than what we have now, and we'll see whether that works out or not. Yeah.
A
Okay, good. Then the biggest topic for today, and the most important one, is the cflinuxfs4 migration. Stefan has already prepared a few notes. What we already have is this more or less empty issue, opened by Greg on September 13th; I think he opened similar issues on the other projects, and we are supposed to migrate to cflinuxfs4.
A
So the rough idea is that we provide experimental ops files for deploying cflinuxfs4: for adding the cflinuxfs4 stack to cf-deployment and possibly making it the default stack. We should validate this with the acceptance tests, which can be configured to use cflinuxfs4. And what we need is for all buildpacks to be compatible with cflinuxfs4. Stefan, do you want to show... yeah, my screen, because today...
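For orientation, a sketch of what deploying with such experimental ops files typically looks like; the ops-file names below are hypothetical, since the files discussed here do not exist yet, but cf-deployment does keep this kind of file under operations/experimental/.

    # Ops-file names are hypothetical placeholders for the files being proposed
    bosh -d cf deploy cf-deployment.yml \
      -o operations/experimental/add-cflinuxfs4.yml \
      -o operations/experimental/set-cflinuxfs4-default-stack.yml \
      -v system_domain=sys.example.com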
D
Okay, so I went through all of this cflinuxfs4 work to see what is actually already available and what is not, because I have the feeling that some work is done, some work is missing, and I don't know what the next steps are. So what we do have is a cflinuxfs4 release.

D
It has been there for quite some time, but this one alone is not of much use. Then I went through all the buildpacks, and I see that, let's say, half of the buildpacks have cflinuxfs4 support, but there's another half that doesn't have it, out of the list of buildpacks that are included in cf-deployment. So the first question is, let's say: how do we continue here?
D
One possibility is, of course, to simply open tickets or issues on all these buildpack projects and ask about the plan. What we could do, independent of whether all the buildpacks are available, is already, let's say, start with some experimental ops files to include cflinuxfs4 and the supported buildpacks right away. Right, I mean, then, yeah. So, for instance, the static buildpack...
D
...is missing for now, but as soon as it is ready, we can, yeah, add it again; that would be one option. Another question that came up for me is the strategy of the buildpacks: this time it is to have, at least on the project level, one buildpack that is supposed to handle both cflinuxfs3 and cflinuxfs4. So it's actually the same binary. But if I look into the BOSH releases, then out of the same sources there are still two different packages built: one for fs3 and one for fs4.
D
So the question here is: do we want to upload both of them, as it was done from cflinuxfs2 to fs3? Although there, I think, the buildpacks really were somehow different; this is something I don't know. If we just continue as the projects are structured, we would upload the buildpacks two times, and if I understood it correctly, it would be exactly the same thing.
D
Yeah, that's right: from the same BOSH release you then have the two packages, but at the end of the day it means that the buildpack is uploaded two times via the Cloud Controller into Cloud Foundry, which...
D
...is a lot of upload, and if you have offline buildpacks, they are really huge.
D
No, no, I think it's independent. Of course you have the two stacks; first of all, they are both there, and you can pick and choose in your manifest, or whatever, when you push an application. But then a buildpack is selected, and if I remember correctly, the selection mechanism looks at whether the buildpack is, let's say, tagged with a stack, and then it chooses either the fs3 or the fs4 version of it.
B
Yeah, I was just thinking about it more from a platform operator's perspective: how do they create a platform where both stacks are available? But I guess you can just co-locate both onto the appropriate instances, and then they'd both be available at that point.
C
If it's just the Java buildpack that's special, maybe this is related to the auto-wiring stuff that's going on. I've been hearing something about how the Java buildpack is changing some of their default auto-configuration things; there have been warnings for some time, and that's kicking in starting recently. Maybe they have two different buildpacks because one is tagged in such a way that it's completely turned off. It's...
D
It's for all of them; it's just one release, and if I go to the Java buildpack project, it's just one of these, nothing more, and they specify it for fs3 and fs4. And then, if you go into the Java buildpack BOSH release, you suddenly see that out of the same sources, the Java buildpack, two different packages are created just by copying the same stuff. We can look inside, sure.
D
I have no idea; I think it came from four years ago. Back then we had really different buildpacks, it was a different strategy, and it took a long time to implement this mechanism. So yeah, this is something we need to decide how to handle. We could continue with the existing mechanism, and then it's just some overhead, and I mean, after cflinuxfs3 gets deleted, all that stuff goes away.
D
We should ask, and maybe invite Greg to a meeting to discuss the next steps. I don't know if you have heard of some plans etc. within VMware, but...
D
Because the worst thing that could happen is that the buildpack folks are waiting and saying: okay, well, we have done everything, the stack is there, the buildpacks are there, we are waiting. And we in cf-deployment do the same: we somehow expect that somebody comes up with the integration in cf-deployment, but nothing happens, and then time passes, yeah.
D
So, ask about what the current status is and what the next steps would be. Good. And then, regarding the integration into cf-deployment, I think we could already start today with adding some experimental ops files: one that ships cflinuxfs4 and the list of supported buildpacks as opt-in, so it's just there and could be used, and then we could also have a second experimental ops file that switches cflinuxfs4 to be the default stack.
D
That's not because I want to do it today, but just so that we can run CATS against them. And then the question would be: how do we start with testing, to see how far we are, meanwhile, with cflinuxfs4?
D
So there are two options. Because they are experimental ops files, we could simply add them to the experimental run in cf-deployment. However, as long as there is a long list of missing buildpacks, I'm not sure whether we have the slightest chance to get the CATS green, and then it would block development. So maybe it's better to have a separate pipeline and start from there; then we see all the failing stuff and could again open bugs and go from there.
D
So maybe that's the more realistic way: to have a separate pipeline, similar to what we had with Jammy in the beginning, now with Bionic, just a small thing with CATS running. And then, I guess, we should really open bugs on everything that fails, including the missing buildpacks, I...
D
Yeah, and to see whether the stack runs at all, I mean, that would be interesting. We do have, let's say, a few major buildpacks where at least some CATS should succeed. I mean, the R buildpack, well, who cares so much, right; there's for sure a certain test. I don't know how configurable the CATS are, whether we could, let's say, exclude certain buildpacks, but of course there are also other cross-cutting concerns.
D
I could imagine that the nginx or binary or staticfile buildpacks are used, because those are the simplest tests that exist, so maybe they are, and then of course a lot of stuff fails. But yeah, if we have at least one or two Java, Python, or Go applications working, then we already know that this goes in the right direction.
C
The two experimental ops files make sense to me at the very least. Beyond that, yeah, it definitely shouldn't go into experimental while it's going to continuously fail pipelines or manual tests in the interim.
D
Okay, then I would put at least the table part of this document into Greg's issue, just to make it a bit more visible what we are doing, and then we could start to work on these experimental ops files, and probably a separate pipeline, to see what is running and what is not running, and then we go from there.
D
I don't know how it works out for your Cloud Foundry installations. From, let's say, the SAP point of view, we would really like to have the cflinuxfs4 support as soon as possible, so that we can offer it as opt-in and give our customers, let's say, a chance to learn about it and to check the migrations. Because our plan is still that when Bionic expires from standard support, that means April or May...
D
...we really want to delete it, and we can only do that if we offered the option to migrate over a little bit ahead of time.
D
That's also why I'm a bit nervous when we don't have anything on cflinuxfs4; it's now already close to, yeah, the end of the year. But yep, I mean, there are a few steps that we from the working group can do that are not blocked, and then the buildpack support, yeah, that's something that's, like, it's...
B
I just checked one of the CI environments that happens to be running at the moment, and on the API node I do see all of the cflinuxfs4 packages already, alongside the cflinuxfs3 ones; so compiling each of the buildpacks is pulling everything in.
D
But it means that we then specify the buildpacks twice, right? Once for...
D
So, yeah, then we keep the existing setup; that makes it easier for now.
D
Simply in this environment we switch fs4 to be the default stack, and CATS will run against it, right? Definitely, definitely.
A
I think it allows you to define a specific stack, but maybe not both.
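For reference, CATS reads its settings from a JSON integration config; a minimal sketch of pointing it at the new stack is below. The "stacks" field name is an assumption from memory of cf-acceptance-tests, and bin/test is its usual entry point; adjust if the actual config differs.

    # Point an existing CATS config at the new stack ("stacks" field assumed)
    jq '.stacks = ["cflinuxfs4"]' integration_config.json > cats_config.tmp && mv cats_config.tmp integration_config.json
    # Run CATS against that config
    CONFIG="$PWD/integration_config.json" ./bin/test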
D
Yeah, let's see; that can be decided in the implementation, whichever is easier: either enable those ops files, or configure CATS for this one explicitly; both are probably possible. Okay, and there was one minor thing I just wanted to bring to your attention, because it's currently being discussed in the TOC. Where is it... yeah, in the TOC. You...
D
...may have heard of this: we discussed a schema for default Git branch protection. That's now in the final week of comments. So, I don't know, in cf-deployment, do you have some special setup? I don't think so; it should work, but just, yeah, to point you to that, because I will probably start tomorrow, or next week, with the automation of the settings, similar to the GitHub team automation, and then one day we will apply those rules.
D
Also because the general recommendation from the TOC is simply that we switch to the PR workflow, yeah, and there was a bit of discussion. I mean, it's not that strict, right? We simply configure the default branch and the version branches, if they exist, I mean by naming pattern, and enforce that you work with PRs. The bots can, of course, still commit directly, but only the bots, nobody else, and we will configure the number of approvals needed on these PRs depending on the size of the area, the working group area, because there are very small ones where, yeah, you need to be able to work, and some that are a bit bigger. So I guess for cf-deployment that means we need one approval per PR.
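A rough illustration of what such a rule amounts to, expressed here against the GitHub REST branch-protection API via the gh CLI; the community automation tool mentioned in the meeting is a different mechanism, and the repo and branch names are only examples.

    # One required approval on the default branch; the remaining protection fields are
    # required by the API and are sent as null/false here
    printf '%s' '{
      "required_status_checks": null,
      "enforce_admins": false,
      "required_pull_request_reviews": { "required_approving_review_count": 1 },
      "restrictions": null
    }' | gh api -X PUT "repos/cloudfoundry/cf-deployment/branches/main/protection" --input -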
D
I guess so; at least, I haven't looked at this tool in detail yet, I have to say. This is again one of these Kubernetes-style automation tools, and, if I remember correctly, the settings that the tool can handle are really applied, and it reconciles to the state that you configured in Git.
D
Okay, and let's do it like this: I don't know yet how the configuration will really be structured and where in the community repo it can be configured. Once I have some ideas and a little bit of automation in place, and before it gets applied, I'm pretty sure we will talk again in one of the working group meetings and see, yeah.
D
We can handle special configuration. If you want to be more strict, I think nobody will object; if you want to have free commits for everyone, then probably there will be a veto, but I guess that's not the problem. I...
D
These requirements will also come from other teams, so we need somehow a way to override it. For most of the projects the configuration will probably be generated automatically, and then maybe, if you have a special configuration for your project in these files, that one is taken; and, right, yeah, maybe some minimum checks that your configuration is not lighter than the one that would be generated, something like that. I don't know yet.
B
Are there items that you particularly want to pick up, or are there things that you want us to pick up? How do we want to handle prioritization and actually tackling these?
A
...of the last meetings, and I mean, if you like, you can of course do this. This is for finding the optimal settings for the CATS tests. These are still set up with CF Toolsmiths, right? Okay, so we need a new account.
C
And that pipeline was running using Toolsmiths environments, which were updated in the interim: somewhere in the middle of the runs, the Toolsmiths environments were actually updated to use ops files that rendered our testing not good, like, our tests were running on more powerful environments than we thought they were running on.
C
The bottom line is that... I think multiple fan-outs in the cf-deployment pipeline still set the timeout scale to two, and that's how they're passing, and we should either move that timeout scale back to CATS, or bump up cf-deployment to make it more powerful and pass without bumping up the timeout scale.
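The knob being referred to is the timeout_scale setting in the CATS integration config; a sketch of bumping it, assuming the same config file as in the earlier example:

    # timeout_scale multiplies the CATS timeouts; 1 is the default, 2 is what the
    # cf-deployment pipeline fan-outs currently set
    jq '.timeout_scale = 2' integration_config.json > cats_config.tmp && mv cats_config.tmp integration_config.json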
C
And that timeout scale thing, I still have hope for it, but we don't have to go down that route. I think we just need to decide what the best option is, and if you have a better idea than running this pipeline on all yours...
B
No, that's fair. I thought we'd gotten a good run from this, and that we'd gotten the information we needed from it, but that doesn't sound like that's the case.
A
The main problem that we have with CATS is not the time it takes to complete one run; that is more or less always half an hour. The problem we rather have is that they are still rather unstable, like here. I mean, 40 minutes, whatever; if it's stable, it's fine, it can run an hour. But I have the feeling we should probably focus more on the flaky tests and try to stabilize them, so that we can get to really good, reproducible results.
D
Is that because Concourse and the environments run in different regions? I mean, this is something we wanted to address, right? Or soon? Yes?
C
So, yeah.
A
It's often failing with BOSH timeouts; I mean, the obvious improvement is that we run two attempts for the BOSH deploy, without deleting everything again.
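A minimal sketch of that retry, assuming a plain scripted bosh deploy; in a Concourse job the same effect can be had with the step-level attempts setting.

    # Two attempts for the same deploy, instead of deleting the environment and starting over
    bosh -n -d cf deploy cf-deployment.yml || bosh -n -d cf deploy cf-deployment.yml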
A
Yeah, okay, good. But so, I mean, the CATS setup is somewhat optimized with the number of nodes and these parameters, and of course we could continue to fine-tune that, but I'm also not sure whether that's really necessary.
A
...environments that still need to be migrated. I can transfer this to an... yeah, yeah.
A
I'll copy this to a GitHub issue, and then you can, yeah... When these are all migrated, I think we should be fine for now. Yeah.
D
Okay, okay, yeah, yeah. We...
B
If we're going to keep that, then I think we can merge the first one. I can ask Julian to rebase the second one, which should make it a lot smaller, and then it would be a lot easier to reason about.
B
Pretty much everything is, because I think maybe only the jumpbox deployment is the only pipeline that might still be running, although it doesn't look happy at the moment, for some reason. Yeah.
B
I thought so, but okay, I'll have to look at that; that's fine. But yes, I think we are very close to being able to just tear this environment down, so I'm not too concerned about trying to resurrect those.