From YouTube: Cloud Foundry for Kubernetes SIG [Sept 29, 2020]
B: Okay then, to start the meeting: welcome, everybody, to this week's special interest group call for Cloud Foundry and Kubernetes. This time we actually have two topics and therefore two speakers. We have Andrew talking about the topic of image ownership, and then we have Vlad giving us an update on KubeCF 2.5. As loosely discussed, both topics should take around half an hour, so I guess we're going to get started with Andrew.
E: Thanks. I'm Andrew Whitrock. I'm on the Release Integration team under the VMware/Pivotal organization, and I'm one of the engineers taking charge on this initiative, which is trying to figure out the question of image ownership and provenance,
and how we can build the images for the platform. The general problem space is: we have cf-for-k8s as a product platform, and the contract is that we bring in what amounts to Kubernetes YAML from a bunch of different sources. Those sources might be teams at Pivotal; they might be teams at SAP or SUSE, or the community at large; or they could be additional components that come in through various projects that are useful and are Kubernetes standards.
Unfortunately, that only gets us as far as running it, because what we have is a collection of OCI-compliant images that you can run on the platform, but we don't know much more about them beyond whatever metadata is already on the images. So concerns like: where did it come from? What are the base images that were used for this? What kinds of dependencies are in there? Who built it?
How one would go about building these images isn't really baked in. So that's the problem space I'm approaching with my image ownership work: given cf-for-k8s pulling in all of these different pieces, can I go explore, find the images, and find out how to build them?
So, pursuant to that, I created a little proposal document that has just been circulated closely around some concerned parties, but I'm trying to make it increasingly available to people. I'll be sending a link out to the cf-for-k8s community pretty soon to gather some feedback, to make sure that we hear all the voices of people that have opinions to contribute. Because really this is going to be a community effort: making sure that everyone feels comfortable about the solution and that we can find something that works fairly well in all situations.
So the overview here, as I said, is that we're trying to prove the provenance of all these build artifacts. The primary problem, from the platform's point of view, is that we want to make sure that if a CVE or some other issue with the underlying base images comes into the platform, we're able to respond quickly and issue new images for the platform.
This is primarily concerned with the OS-level dependencies. The direct dependencies of any of the CF project teams' products would be more appropriately handled by those teams, and communication between all of those groups would be how we would go and issue new images from there.
As for cloud native buildpacks, for those of you that are less familiar with them: you can use a tool called pack with these cloud native buildpacks, which are created by Heroku, by Cloud Foundry Foundation members, and really by anyone who wants to build a CNB. They're the same idea as a buildpack, just rebuilt in a new OCI-compliant way, where it detects everything and builds it for you. So those are the two main paths we want to be concerned with: Dockerfile builds and cloud native buildpack builds.
So if we move on a little bit, there's sort of a chronology roadmap (and I apologize, this is small; let me increase the size) of where I want to go with this. So if you're thinking about how you can offer your feedback, this is roughly where I'm thinking the various steps will go.
After that, we have to have something around source discovery because, as I mentioned, the contract is between cf-for-k8s and these Kubernetes YAML releases, or templates that build Kubernetes YAML, and then between that and the images. So we need a way to actually discover where all of the source code lives if we want to rebuild these images. After that, once we've got the groundwork down, the thought is more around the build process and integration: making it standardized and really easy for anyone to build these images for themselves.
If you wanted to move the base image to something else that is still within the realm of supported stuff (pretty similar, but maybe customized to your needs), you would be able to do so. So that's roughly what we're thinking about. If I move down here: this is rough English language throughout most of this proposal, so as not to get too much into the implementation details, which can be fleshed out later. But we do have sort of an image specification.
So, for your Docker-based images: at the very least, we need to have the OCI image spec annotations, your standard source and revision annotations, which are becoming a lot more popular basically everywhere that uses container images.
The base image should be parameterized, and then we should have an annotation that says this image was used for building this one. And then, additionally, I'd like to have a component annotation label, just to say: hey, this is for this component, so we can associate them a bit more on the pack side of the house.
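
A minimal Dockerfile sketch of that specification, assuming the standard org.opencontainers.image.* annotation keys; the base-image and component label keys, their values, and the repo URL are hypothetical placeholders, not names from the proposal:

```
# Parameterized base image with a sensible default (illustrative value)
ARG base_image=cloudfoundry/run:base
FROM ${base_image}

# Re-declare the args inside the stage so they can be referenced in LABEL
ARG base_image
ARG git_sha

# Standard OCI image spec annotations for source and revision
LABEL org.opencontainers.image.source="https://github.com/example-org/example-component"
LABEL org.opencontainers.image.revision="${git_sha}"

# Hypothetical annotations recording the base image used and the owning component
LABEL io.example.image.base-image="${base_image}"
LABEL io.example.image.component="example-component"
```

Such an image could then be rebuilt against a different supported base with docker build --build-arg base_image=..., which is the customization path described above.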
So, with all that said, there are two main pieces of specifying some of this that I want to cover, which would be source discovery proposal A and source discovery proposal B. The main thrust of those, without reading all of this text, is that we can either put a whole bunch of this metadata in a manifest-type file that lives inside of the release repositories, which would be where all this Kubernetes YAML lives,
so you would have all the YAML that references the images, and then you would have a file that basically says: here's all the metadata you need to build all of the images that are present. Proposal B, on the other hand, moves some of that information over onto the actual images themselves.
So you would follow the chain from cf-for-k8s to the individual project, to a file that says "here are the images", and it would fan out; you'd go to those image references and inspect them to see how they were built. So either way could potentially work, and there are probably also other possibilities that I'd love to hear about from people. But the main thrust is: I would like to be able to just follow the chain to the image reference and collect all the information along the way.
I apologize for that loud noise that maybe everyone heard. The rest of this gets into the build process integration, and then a few exemplars, which are just little examples of what you would see: here's a Dockerfile, you have some args, you have the sensible defaults, and then you have some labels, like the sketch above.
I define some of the terms here; I'm just running through, so when you get your hands on this document you can give it more detailed consideration. This is just roughly how everything looks. I went into some of the considerations and build requirements. So that's the main thrust of this document; like I said, I'll be trying to get it out to anyone that wants to see it.
So, basically, I forked everything in cf-for-k8s, more or less. This is basically cf-for-k8s with an additional component-packages directory here, but it's otherwise pretty much the same. I just wanted to make a single repo so that I could explore some of this and get a demo-able thing.
I don't have any of the machinery around automatically building any of these yet; I just submoduled everything into this component-packages directory. It illustrates some of the problem, so hopefully people can understand the journey of how I got here. So if we take capi-k8s-release: capi-k8s-release, like I said, is a full project. It's a thing that says: hey, I have all of these pieces.
You can drop this onto a Kubernetes cluster with cf-for-k8s and you will get all the bits that the teams behind it have been working on to make the product work. And if you look at this, there's a bunch of good stuff in here, but there's no direct "here are all the images". I created this file myself, so it didn't actually exist before; instead, I had to go look in here, where there's, say, a Dockerfile.
I could go in here and, you know, do a tree to try to find Dockerfiles, grep for "docker", all this stuff, and try to find them, and that might give me some confidence that I found some things. But it's not really going to help me that much, and I also just don't know where they are. If you've ever done a docker build, generally the defaults are: you cd into the directory, that's your context, and you assume there's a file
E
There
called
docker
file
at
the
root,
but
that
may
not
be
the
case
right.
It
might
be
named
foo.
It
might
be
in
a
completely
different
project.
It
might
be
in
different
directory.
The
context
might
be
the
root
of
the
repo,
but
the
doctor
files
in
here
so
they're
kind
of
scattered,
so
here's
one
there's
source
and
then
cf
api
controllers
and
then
there's
the
docker
file
here.
So if we do something like this, you'll see there are several different images here; it's not just the three Dockerfiles. So I'm having to track down these different Dockerfiles, or images if they're built with something else. You can see, without going into too much detail, some of the problems with just trying to find the Dockerfile.
I don't want anything else, so I started tracking down all of the images and cataloging them. Here is the rough (super rough) example that I did, by just exploring, of what a first pass at this images manifest would look like. It says: here's the project, here's where you find it, and then here is an array of images. So you have something like: here's the name, which is the ref, not fully qualified, just the name it's usually referred to as; and what type of build it's going to be. And then,
this one is actually not in this repo; it's in a separate repo, and it's not submoduled in. So I need to know where that remote source is, because it's not tied in any other way, other than that you might be able to inspect the image and find it on there. Then there's the local case. Both of these are pack-generated, but for this one the local source is in src/cf-api-controllers, to make it easy.
For this one I just put a comment here, which is that it's a cloud native buildpack image, so we would probably be consuming those from the CNB team rather than rebuilding them, but that could be up for debate. And there's capi, which is a local Dockerfile build, anyway.
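
Pieced together from the walkthrough above, a hypothetical sketch of what such an images manifest could look like; the field names, repo URLs, and values are illustrative guesses, not the actual schema from the proposal:

```
project: capi-k8s-release
source: https://github.com/cloudfoundry/capi-k8s-release   # illustrative
images:
  - name: cf-api-controllers        # short ref, not fully qualified
    build: pack                     # built with cloud native buildpacks
    localSource: src/cf-api-controllers
  - name: cf-api-server             # hypothetical image living in a separate repo
    build: pack
    remoteSource: https://github.com/example-org/separate-repo
  - name: capi
    build: dockerfile               # local Dockerfile build
    context: .
    dockerfile: Dockerfile
```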
I don't need to get too far into the weeds on most of this, but I wanted to point out that, as I walked through these, I was doing my due diligence, trying to reason about it purely from the point of view of not asking the team and just digging on my own. And for things like these two: I see the image, but I don't know where the source is; I can't even reason about where it might be.
I looked at the images and they don't have it annotated either. So I would be unable to actually build these images at this point in time, which is a bit of an issue when you're trying to say: hey, I know this is built on, you know, Ubuntu whatever, and there's a CVE; I want to push out new images immediately. So we're trying to tighten that loop, so that we can avoid bothering some of the leaf nodes about concerns that are really more core to the platform.
So hopefully that gives you an idea of where I'm going here. It's by no means complete. There might be things like build-time args that you would want to include, because people might be changing something like the base image. Or, some of the images you'll see have the git SHA
wrapped in a conditional statement, to fail the build if you don't provide the git SHA, which is always nice. So things that you want to think about are these additional build args, and that could apply to both pack and Dockerfiles.
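
A minimal sketch of that kind of guard, assuming a build arg named git_sha; the base image value is illustrative:

```
FROM cloudfoundry/run:base

# Fail the build early if the git SHA was not supplied, e.g.:
#   docker build --build-arg git_sha=$(git rev-parse HEAD) .
ARG git_sha
RUN test -n "${git_sha}" || (echo "git_sha build arg is required" >&2 && exit 1)

# Record the revision on the image using the standard OCI annotation key
LABEL org.opencontainers.image.revision="${git_sha}"
```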
So that's additional metadata that might go in the manifest, or it might go on the image through the Dockerfile labels.
So, let's see if I can pull up... yeah, here's just some of the image parameterization, arguments, all that stuff. Without going more into that, I want to make sure that with my time I leave it open for people to ask some questions. This isn't just "let me talk at you"; this is also a first pass at offering some thoughts to direct where I go from here.
A: [inaudible question]

E: Currently, no. The current flow, and I guess this is information that I probably should have gone over a little bit beforehand: the current way these images are built involves the individual CF project teams, plus additional projects that are built by various other teams and community members.
They build these images and they publish them; then we pull them in, and at that point they're pretty much used directly in cf-for-k8s. There are a couple of images that we pull in ourselves, but we're mostly just allowing them to flow in through those project teams. This is fairly workable most of the time but, like I said, we would like to have a little bit tighter control over what the images actually contain, to have that confidence, and to be able to pass that confidence on to the community
and say: here are all the dependencies; if you're concerned, here's all the information. Part of the way we're doing that is the newly open-sourced tool, dependency labeler (deplab). We're trying to put all that information on the image, and one of the things we need for that is the actual source. We need to point at the source, because it scans not only the base image for the run image, but also the source code, for any
package managers or dependency managers, to actually apply all those labels. So yeah, that's kind of the flow: we're just taking the images as-is at the moment. So there's no big, giant pipeline to be able to reason about this; that would have to be built from the information I'm gathering here. Because the idea is that I should be able to take cf-for-k8s,
and dive into the projects that are represented in this vendir.yml. If I haven't covered vendir: it's essentially a dependency management tool, kind of like git submodules, but a little bit more flexible, and that's what brings in the product. So I should be able to dive into any of this information, track down the images from those projects, and then build them.
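
For reference, a minimal vendir.yml sketch; the directory layout, repository URL, and ref here are illustrative, not the actual cf-for-k8s configuration:

```
apiVersion: vendir.k14s.io/v1alpha1
kind: Config
directories:
  - path: config/_ytt_lib           # where vendored dependencies land (illustrative)
    contents:
      - path: eirini
        git:
          url: https://github.com/cloudfoundry-incubator/eirini-release
          ref: v1.9.0               # hypothetical pinned ref
```

Running vendir sync then pulls those pinned sources into the repository, which is what makes this chain followable.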
So, in an ideal world, Release Integration would be rebuilding all of the images by following that chain, rather than just having arcane knowledge of where everything is and a pipeline that has all of this built into it. It should be more discoverable and a little more programmatic, so that everything is treated the same, rather than having bespoke builds for each one.
A: Super valuable for regulated industries, and governments and militaries, and things like that: being able to trace the provenance of everything that is part of the platform.
A: I'm just wondering if... oh, go ahead, Vlad.
D: Yeah, thanks. Will we be able to build the Dockerfiles without extra tooling, like just docker build, or using regular image-building tools?
E: The idea would be that at least the Docker-based images you would be able to build just with your plain docker build. You might have to do a little bit more work to parse out some of the labels that might contain some of that metadata, which might be a JSON-stringified blob, or you might have to do some parsing of the YAML in the manifest. But for the Docker-specific builds, yes.
If you want the dependency labeler information when you rebuild the image yourself, you would also need to get deplab and follow that process. But then, it's also open source; you can grab it. And that would depend on whether you want to add that information to the image that you build yourself.
E: Probably not on one single base image, but within a family of base images, yes. The general thrust right now is to use the Cloud Foundry base images for build and run, basically across the board. They're provided by the Paketo buildpacks team and they seem to be a pretty solid option, especially given that they are producing the CNB images. So if we can standardize across what they produce, then at least we have a single point where the OS base considerations come from.
A: Just to make sure I understood that correctly: did you say that the intention is for most components to use the same base images as are used in the cloud native buildpacks, so basically the same images that apps will be based on?
E: They produce specific base images that are meant for Dockerfile-style use, so they are probably relatively similar in what they provide, but I don't think they're identical; I would need to go inspect them. There are different CNB build and run images for the different builders, and then there are the images that are meant to be used just as a base for build or run.
A: Okay. This problem you're trying to solve sounds like a more general problem, not a problem for Cloud Foundry only. So one might ask if there is already a solution to this out there, maybe developed by some other vendor.
E: I haven't done an exhaustive search, but I didn't see anything come up immediately. There were various tools that had little pieces of it, but nothing that was sort of "this is exactly what we need", because most projects don't have exactly the same concerns that we have.
Most projects treat it as "here's an image, cool", whereas we're treating it as a fan-out project that contains a bunch of stuff, which then feeds into a bunch of other projects, which feed into cf-for-k8s. So if you do find something, if you know of something, then I would be happy to look at it, but I haven't seen anything yet.
C: SUSE actually has something like that: the Open Build Service. And I imagine Red Hat probably has something similar; you can understand why an OS company would have designed such a thing, because, you know, we're supporting things for decades. But it's been more package-oriented than container-oriented in the past.
So we do build some of this stuff, things like buildpacks and such, in OBS, but not everything. And it is an open project, as everything is, but I'm not sure it's the best fit, because it's rather heavyweight, with all of the legacy stuff that it comes with. But it does solve the problem.
You know, I mean, think about the problem you're solving. This is an OS vendor who passes all of the Common Criteria certifications and knows exactly what the bits are. So if you want to rebuild the same bits ten years later, they can do that; though only in an RPM-package-based system, and somewhat in some of the container stuff, which we've been trying to improve. So it's not an exact match for all of these things, but it does cover large chunks of the world.
E: Good to know; I'm always happy to hear about some of these tools. I think that is definitely part of it. The source discovery inside the template projects is another piece that may or may not be necessary, but I think it's definitely helpful for solving the dependency in the other direction: you have the images, which maybe have all this information so you can rebuild them, but they don't...
B: I hate to cut the discussion short, but I guess we need to give Vlad some time to present the other topic for today. Sorry for that, Andrew, but thank you very much for presenting. I guess people can reach out to you on Slack, or via comments in the document that you're about to send out.
E: Yes, you can find me in the Cloud Foundry Slack, in the cf-for-k8s and other channels; I'm @birdrock, like b-i-r-d-r-o-c-k. So yeah, feel free to ping me. Okay, thank you.
D: I have just a couple of slides. Yeah, hey everyone, I just wanted to give a quick update on KubeCF: we have 2.5 out, of the 2.5 series. I'm going to discuss what's new on the Diego side of things and what's new on the Eirini side of things, and then open it up for discussion, if there are questions. So, on the Diego side: we are basing the release on cf-deployment 13.17. I'm not sure
if we went beyond that for CVEs and things like that, but the last time I checked, it was based on that version. We have multi-cluster support, if you need to go beyond one Kubernetes cluster, for either isolation or size.
We have support for tolerations, for easy configuration of tolerations; you could configure those with 2.4 and previous releases as well, but it wasn't easy, and this makes it easier. We also, by default, assign local storage to the Diego cells from the nodes. So you can imagine that if you're deploying Diego with KubeCF on a Kubernetes cluster, we recommend that you run one Diego cell per node. You tie one cell to one Kubernetes node through tolerations and taints, and then you're essentially using that whole node to run Cloud Foundry apps, and you're sharing the storage of that node with the Diego cell. That leads to better performance, more stability for applications, and so on.
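
A rough illustration of the taint-plus-toleration pairing described here; the taint key and value are hypothetical, and the exact KubeCF configuration keys may differ:

```
# Taint a node so that only Diego cells schedule onto it, e.g.:
#   kubectl taint nodes worker-1 example.org/dedicated=diego-cell:NoSchedule
# Then give the diego-cell pods a matching toleration:
tolerations:
  - key: "example.org/dedicated"
    operator: "Equal"
    value: "diego-cell"
    effect: "NoSchedule"
```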
So that's mainly it for Diego. Then, for Eirini, we focused on delivering CATS, so having a baseline where CATS are green with Eirini for the 2.5 release. We're almost there with CATS; the next release of KubeCF will have a particular set of CATS that are green, which we define as the baseline. There are two or three failing CATS at the moment, and they're intermittent; if I understand correctly, the problem is around log output, some assertions being made on log output.
There are Eirini extensions features enabled: SSH, persistence support, logging without fluentd, and app DNS, which is a new Eirini extensions component that we've added. It allows applications deployed by Eirini to have access to the internal application DNS for BOSH components, so that Eirini apps can talk to things like CredHub or internal UAA
hostnames, for example. There's also a table here, if you're interested, on which CATS suites are enabled and which are still disabled. And yeah, once we have this green baseline for CATS, we'll continue improving on it and adding more of these as KubeCF releases pop up. I think that's it for my update. Any questions?
E: And like I said, I'd be happy to hear about anything anyone else knows about because, while I did search around, it was not exhaustive; I just didn't find anything that really directly fit the bill.
B: While you were presenting, I was trying to recollect the things that you want to achieve with your approach, and I think there are a couple of things that you want here, like discoverability, rebuildability, et cetera. So I'm also not sure that there's a solution for all of these out there in the open source, but I wanted to give it some more thought before saying that there is something or there isn't something.
E: Absolutely. And part of the idea of the proposal is also to locate the data where it really belongs, right? Release Integration shouldn't be trying to maintain a knowledge base of how every single image in the platform is built. We need to put that image information either directly on the images or in the projects that bring them in.
That was some of the impetus for having a manifest file that lives in the project, so that the people who are building these images originally actually maintain the instruction set, the programmatic instruction set, on how to build the images, rather than trying to have one team maintain all of that. Because that could easily create a bit of drift if the shape of those changes: different arguments come in, the directory structure changes, stuff like that.
B: Okay. If there are no further questions to Andrew, and also no further questions to Vlad, then I think maybe one last organizational topic. Beyond voting for topics for this meeting, I also set up a vote on the question of whether we should keep the meeting frequency at every two weeks or change something regarding the meeting frequency. Last time I checked, I think pretty much
all people were in favor of keeping the meeting frequency as is. Given that our topic list is now empty after these two presentations, I definitely rely on people to either submit requests for topics to be presented or, even more ideally, on teams stepping up to proactively suggest topics that they are willing to present.
B
So,
having
said
that,
please
follow
up
and
put
put
topics
into
the
world
that
I'm
I'm
going
to
to
put
on
on
slack
and
questions
with
around
any
last
minute
topics.
Anything
that
you
want
to
discuss,
announce.
Because if not, then I think we can give everybody 15 minutes back, to prepare for the next meetings or for the evening, depending on where you are. Okay, thanks everybody, and talk to you next time.