From YouTube: Cloud Foundry Community Advisory Board [March 2020]
Description
Agenda can be found here:
https://docs.google.com/document/d/1SCOlAquyUmNM-AQnekCOXiwhLs6gveTxAcduvDcW_xI/edit
A
Hello, welcome everyone to this month's Cloud Foundry Community Advisory Board (CAB) meeting, on the 18th of March. I hope everyone is safe and healthy and has everything you need as we hunker down for COVID-19. Let's get started. I just blew my surprise there again. Let's start with Swarna, who's going to tell us a little bit about what the Cloud Foundry Foundation has going on right now.
B
Sure. Chip is also here, so I will let him add anything he would like to add. But we just wanted to make everyone aware that we understand, when we say these are uncharted territories and uncharted waters, these are literally those. So, at least as of now, we are continuing as planned for Cloud Foundry Summit in Austin. However, we also realize that the City of Austin has a lot of restrictions, at least for the next five or six weeks, which still puts us out of that window.
B
But we will be monitoring the situation and making updates on that page; I added a link to it in the notes here. We will also send out notifications to the community as and when we get more updates. So please do stay tuned, and please do take a look at the emails whenever we send them out. In the meantime, we understand that a lot of things are going on and it's really hard for everyone to focus as well.
B
So we extended the deadline until the 3rd of April. It was supposed to end this week, I think this weekend, but we extended it until April 3rd so that folks will have a lot more time to think about their abstracts, proposals, and submissions. If you have any questions or even concerns about anything related to the summit, please do not hesitate to either post the question on the summit Slack channel or drop a note to Giles, our events lead, or any of us at the Foundation.
C
Sorry about that. I want to add a couple of things, Trey, maybe to reinforce what Swarna was saying. First, we held an ask-me-anything call for our member contacts, and there was a great question asked yesterday: what's the risk of canceling? The choice that we made to attach CF Summit to the Open Source Summit in North America, and similarly for later in the year in Dublin, means that organizationally we have a very, very limited financial exposure there.
C
So we're not approaching this as a financial question at all. We are relying on the Linux Foundation to make choices for the larger event that we're attached to. They have an epidemiologist, an ex-CDC person, who is consulting with them, in addition to working very, very closely with both the venue and the City of Austin to make calls on this.
C
So that's some of the context about the event: we're going to think about people and community safety, and, you know, if it seems like it is safe, maybe people are going to want to get out. So we're trying to play that loose right now. However, do think about submitting talks, even though there's a likelihood that we're not going to hold an in-person event, because we are absolutely going to pivot, if necessary, to some type of virtual format, and there's a ton of amazing technical content.
C
So your talk submission ideas are just as valuable regardless of the format in which we end up getting the community together, so don't let the possibility of delay delay your own efforts to pull a good talk submission together. Does that make sense, and are there any questions about what we're doing there?
A
That's great, Chip. One of the things I value most about CF Summit is the Contributor Summit. So if we end up going virtual, maybe we could also organize some topic-specific contributor meetings, just to get the lay of the land as well: not just presentations but actual discussions. Yeah.
C
A
Cool, thanks very much, Swarna; thanks very much for extending that deadline. I know I was swamped with other things, and I'm grateful for a little bit more time to prepare stuff. If that's all from the Foundation, let's hear from the PMCs. Thanks, Eric, for getting some notes in. If somebody wants to share the actual agenda document, to add things to it, or in case it's changed since I took the snapshot... but I'll hand it over to you, Eric, for the Runtime PMC.
D
Thanks, Trey. Happy to cover a few highlights from the last month or so. I think one of the major ones is that we accepted the incubation proposal for KubeCF, and that team has now released version 1.0. So congratulations to all of you; give it a spin, it looks like it's got a lot of great capabilities. I think it's a great solution right now for getting CF running on top of K8s. Among the other teams, I know Release Integration is doing some planning for the next major version of cf-deployment, and they have a few notes.
D
I think those are in the PMC notes, if not on the mailing list yet, about some of the upcoming changes there. They're also focusing a lot of energy on cf-for-k8s and integrating a lot of the other component teams' changes into that project. CAPI in particular has been working with Release Integration to get their integration with kpack, for cloud-native buildpack staging, into it. That hit a few communication snags, but otherwise it's proceeding pretty smoothly, and we're all excited to get that first integration of cloud-native buildpacks into this other K8s-focused distribution of CF. UAA is also continuing to advance with its Kubernetes packaging, and I think they're right now focused on refining some of the secret management in the configuration, beyond the initial lift of some of the BOSH release properties into Kubernetes configuration. Networking is also integrating into cf-for-k8s, and they've been making more progress on offloading some of the networking responsibilities to Istio and its sidecars.
D
They have even been recommending to the other component teams that they start switching back to plain HTTP communication as one mode, so that they can let the Istio service mesh take over transparent mutual TLS on the container network for those components. We're also getting more telemetry integrated into cf-for-k8s: Loggregator is proceeding with some of their planned changes for the log transport consolidation components in cf-for-k8s, so we're getting app logs plumbed out, and they're also working on getting container metrics exposed.
D
And, last but not least, Diego is doing some pretty deep evaluation of a really fascinating PR from the community that is tweaking how the auctioneer does placement of containers, to more heavily weight or favor Diego cells that have a lower BOSH index. The company that contributed this has found that they're able to get some significant space savings by trying to fit containers onto those lower-index cells, which leaves the higher-index cells more open for bigger workloads; they can then scale those down a little bit.
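For readers following along, the placement heuristic Eric describes can be sketched roughly as follows. This is not the actual Diego auctioneer code; all names, fields, and the tie-breaking rule are illustrative assumptions, just to show the bin-packing idea of favoring lower-index cells.

```python
# Rough sketch of the placement idea: among cells with enough free
# capacity, prefer the one with the lowest BOSH index, so higher-index
# cells stay open for bigger workloads. Illustrative only.

def pick_cell(cells, required_mb):
    """cells: list of dicts with 'index' and 'free_mb' keys."""
    candidates = [c for c in cells if c["free_mb"] >= required_mb]
    if not candidates:
        return None
    # Lowest BOSH index wins; more free memory breaks ties.
    return min(candidates, key=lambda c: (c["index"], -c["free_mb"]))

cells = [
    {"index": 0, "free_mb": 512},
    {"index": 1, "free_mb": 4096},
    {"index": 2, "free_mb": 8192},
]
print(pick_cell(cells, 256)["index"])   # small app packs onto cell 0
print(pick_cell(cells, 2048)["index"])  # bigger app skips to cell 1
```

With this weighting, small apps pack densely onto low-index cells, and an operator can later scale down the mostly-empty high-index cells.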
D
So again, this is going to be an optional change to Diego, but assuming that it works out really well, it could be a very interesting optimization for that kind of placement, especially for operators that are managing a wide range of container sizes. Anyway, I think those are some of the major highlights for the past month, and I see some updates for the Extensions PMC coming into the Google Doc right now.
E
So, thank you; thank you, Troy. The update is an easy update: I don't have any update, and maybe it's because last month I had some personal time off, the passing of a family member, which meant that I took a couple of weeks off and I had to cancel our monthly meeting. But the meeting this month is coming up next week; it's always scheduled at the end of the month, so I can give you the update next month.
E
A
Okay, thanks. So that's all we've got from project highlights this month. Let's move on to something I'm super excited about, which is the CF-for-K8s demo. Those of us who have been in the Kubernetes SIG meeting know all about this, and I've just written a blog post, which hopefully will come out soon, sort of describing the different tracks we're on as a community to deliver not only Kubernetes-native components but also Kubernetes-native distributions.
A
F
Awesome. So hopefully all of you can see my screen. I created a small deck to give more structure to the demo. One recommendation I have is to ask questions as you go through the demo, in the Zoom chat room. Once I walk through the demo, I'll answer the questions as long as I have time; if not, I'll copy and paste the questions from the chat into a doc, answer the unanswered ones, and share that widely in the cf-for-k8s channel. Awesome, all right. So let me put this in present mode.
F
So we started the journey around November of last year, when teams reached out to us for help with their integration efforts for cf-for-k8s. As they were building capabilities on K8s, we were using some hybrid BOSH-and-K8s deployments, and it got increasingly difficult for them to build and ship their components. After our research we had an inception in December, and after identifying the urgent versus non-urgent needs, we submitted a proposal in January 2019. We got some great feedback, and we had our first repository created in Feb 2019.
F
That was the first commit, if you will, and we shipped the first release for users to try, with cf push of Docker apps only. And here we are in March, with a cf push buildpack app demo. So it is phenomenal to see the progress we made in just two to three months; I'm pretty proud to be part of this journey and to bring you all this demo.
F
C
F
D
F
That's really fine; I'll use that excuse. 2019, 2020: we go back in time. All right, cool, so I'm going to the next slide. So what is CF-for-K8s? What is this made of? I just thought maybe I'd give a lay of the land here. Here's sort of a high-level look under the hood of cf-for-k8s: the green boxes designate the core CF components, and the yellow boxes designate the dependencies. There are two sorts of namespaces, if you will. One is the app namespace, which is where the CF apps are deployed, and the other is the system namespace, which generally contains the Cloud Foundry components and the dependencies. A few notable things I would like to highlight: we are using kpack to stage apps, as Eric mentioned in the previous updates. kpack utilizes cloud-native buildpacks to build images in a consistent and reproducible way.
F
We are using Istio to manage inter-component communication, security policy enforcement, and so on. From a lifecycle standpoint of managing CF itself, we are using kapp to manage the Cloud Foundry lifecycle; the best analog is the bosh deploy CLI command that we're used to. And we're using ytt for our templating needs.
F
F
A
F
So I'll jump into the demo. Some disclaimers: I'm using a branch that is yet to be merged to master, but it's very close. We are updating documentation and adding some known issues so folks are aware when we actually do merge it; I expect this to be merged in the next day or two. But for this demo I'm just playing off of that branch, so it's going to be a bit of a wild west. Let's see how it goes. Awesome, so I'll switch to my screen.
F
Can everyone see my screen? Yep, awesome. So first, if I run kubectl get namespaces, that's just to show you that this is a clean K8s install that has no CF on it. And then over here I am in cf-for-k8s; as I mentioned, I'm using the branch right now. So I'm going to go ahead and run through the steps. I've written down the steps here, which are very similar to the steps in our documentation on the repository. The first one is the domain name.
F
So this is your CF system domain; you're probably all aware of that. I'm going to go ahead and apply that. And then, as I mentioned to you all, kpack requires a registry to host the CF app images, so I'm using the Google Container Registry for this demo. You could use any publicly known registry, like Docker Hub.
F
If you want to, you could use that instead, but I'm using GCR. Great. So we have a script called hack/generate-values; what this really does is generate an install file. It takes in those parameters and creates an install YAML with values. Again, this should be in the documentation. So I'm going to create a brand new one.
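To make the shape of that step concrete, here is a toy Python sketch of a values-file generator. The real hack script in the repository also generates certificates (via the BOSH CLI) and its exact field names may differ; everything below is an approximation for illustration.

```python
# Toy sketch of a generated data-values file for the install step.
# The real script also creates certs; field names here are assumptions.

def generate_values(system_domain, registry, repository_prefix):
    return "\n".join([
        "#@data/values",
        "---",
        f"system_domain: {system_domain}",
        "app_domains:",
        f"- apps.{system_domain}",
        "app_registry:",
        f"  hostname: {registry}",
        f"  repository_prefix: {repository_prefix}",
    ])

print(generate_values("cf.example.com", "gcr.io",
                      "gcr.io/my-project/cf-workloads"))
```

The point is simply that the install is parameterized by a system domain and an app-image registry, and the script turns those two inputs into the YAML the deploy step consumes.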
F
It uses the BOSH CLI, and that's our only dependency on the BOSH CLI: it's a nice little tool to generate certs, but you can use other tools if you don't want to use the BOSH CLI. Great. And then, I think I've made a check, so I'm going to go ahead and kick off the install. All I'm going to do is run it, and then I'm going to put a watch on this to see how the pods are created. So right now there are no resources found.
F
This will take a few minutes. I believe it starts with the cluster-level resources: as you can see, it's installing role bindings and namespaces, then it starts creating the actual CRDs next, and after that it starts creating pods, and so on and so forth. So there is a certain order it tries to follow, and you can also add conditional ordering in kapp through annotations, if there are any specific ordering use cases that you may have.
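The apply ordering being described, cluster-scoped building blocks first, then workloads, can be sketched conceptually like this. The real ordering logic lives inside kapp (and can be customized via annotations); the priority table below is an illustrative assumption, not kapp's actual rules.

```python
# Conceptual sketch of ordered apply: namespaces and RBAC first,
# then CRDs, then config, then workloads. Priorities are illustrative.

KIND_PRIORITY = {
    "Namespace": 0,
    "ClusterRole": 1,
    "ClusterRoleBinding": 1,
    "CustomResourceDefinition": 2,
    "ConfigMap": 3,
    "Secret": 3,
    "Deployment": 4,
    "StatefulSet": 4,
}

def apply_order(resources):
    # Unknown kinds sort last; the sort is stable within each tier.
    return sorted(resources, key=lambda r: KIND_PRIORITY.get(r["kind"], 99))

manifest = [
    {"kind": "Deployment", "name": "capi-api-server"},
    {"kind": "CustomResourceDefinition", "name": "builds.kpack.io"},
    {"kind": "Namespace", "name": "cf-system"},
]
print([r["kind"] for r in apply_order(manifest)])
# ['Namespace', 'CustomResourceDefinition', 'Deployment']
```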
A
F
So, yes, at this stage right now we are not adding any specific ordering, other than, I believe, kapp's implicit ordering of CRDs first, but you could potentially do that. For example, you could have UAA be the first one and then CAPI be the second, because CAPI depends on UAA; you could actually order it that way. Ideally we don't want to do that, at least in cf-for-k8s; the idea is that things eventually converge to working. But it is a possibility.
F
F
A
And speaking as someone who's used KubeCF a lot, these crash-loop backoffs are kind of normal as components crash toward consensus. This is where the ordering can help, but it shouldn't be relied on, because every component that's launched on Kubernetes should be able to recover and connect to the other components, right?
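The "crash toward consensus" behavior amounts to each component retrying its dependencies with backoff until they come up, with no start ordering required. A minimal sketch of that convergence loop, with all names invented for illustration:

```python
# Sketch of convergence without ordering: a component retries its
# dependency with exponential backoff until the dependency is ready.
# In Kubernetes the restart/backoff loop is what CrashLoopBackOff does.

def connect_with_backoff(dependency_ready, max_attempts=5, base_delay=1):
    """dependency_ready(n) reports whether the dependency is up on try n."""
    delays = []
    attempt = 0
    while True:
        if dependency_ready(attempt):
            return attempt, delays
        attempt += 1
        if attempt >= max_attempts:
            raise RuntimeError("dependency never came up")
        delays.append(base_delay * 2 ** (attempt - 1))  # 1, 2, 4, ...

# e.g. "UAA" becomes ready on the third try; "CAPI" just keeps retrying.
attempt, delays = connect_with_backoff(lambda n: n >= 2)
print(attempt, delays)  # 2 [1, 2]
```

This is why the earlier point holds: explicit UAA-before-CAPI ordering is an optimization at best, since the retry loop converges either way.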
F
Yeah, I think this is some of the efficiency we can gain by talking to the component teams: what are your init requirements, what are you looking at, and so on and so forth. Okay, so kapp has finished, and we can see that it has installed all of the components. We should now have multiple namespaces. One thing I want to call out as I look at the workload names: as I mentioned, we have cf-system, but we also have istio-system as a separate namespace.
F
We're using GKE right now for our Kubernetes needs, but we're planning to think more about using other distros as well. So here's my sort of domain address; I'm going to go ahead and paste that so I can use it to connect the CF CLI, and then I can just watch and see it propagate. Give it a few seconds.
A
F
E
F
E
F
F
F
F
F
Yes, you have that ability. You can use kapp, or you can directly use kubectl to get the logs from a pod. With kapp you can use wildcard characters, so you don't have to know the pod name: you can simply say capi-star and follow, and it will merge the logs from all of those pods into one stream.
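The wildcard trick boils down to glob-matching pod names and merging the matched streams. A tiny stand-in for the matching half, using Python's fnmatch; the pod names are made up, and the actual log streaming is of course done by kapp against the cluster:

```python
# Glob-match pod names the way a "capi*" wildcard selector would.
# Pod names below are invented examples.

from fnmatch import fnmatch

pods = [
    "capi-api-server-6bd9d8c7-x2lkq",
    "capi-clock-0",
    "uaa-0",
    "eirini-5f6df7b-9qwtr",
]

def match_pods(pattern, pods):
    return [p for p in pods if fnmatch(p, pattern)]

print(match_pods("capi*", pods))
# ['capi-api-server-6bd9d8c7-x2lkq', 'capi-clock-0']
```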
F
F
A
F
E
E
F
Yeah, so let me pin that question for a moment and answer the first one, which is: where are we, and how long until 1.0? I think that's the question, right? The first one, yeah. So just a quick note on that: what does the integration setup look like today, and what does the contribution workflow look like? On the left, what you see is the workflow that we are using today for the core component teams to contribute to cf-for-k8s.
F
The contributing teams use the cf-for-k8s artifacts in both their local and their CI environments. Once they are ready, they build the Docker images and submit a PR to the cf-for-k8s repository, which can contain config changes and image digest changes. Then we run it through our own integration, and if it's successful we merge it to master; so if you pull from master, you should have a stable cf-for-k8s install. And on the right is a high-level peek into how our integration setup looks in general.
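The "config changes and image digest changes" in such a PR essentially pin a new image build by its content digest. A toy sketch of that bump, assuming nothing about the real repo layout (which uses ytt YAML, not this helper):

```python
# Rewrite an image reference's sha256 digest in a config string.
# Purely illustrative of what an image-digest bump PR changes.

import re

def bump_image_digest(config, image, new_digest):
    pattern = re.escape(image) + r"@sha256:[0-9a-f]+"
    return re.sub(pattern, f"{image}@sha256:{new_digest}", config)

config = "image: cloudfoundry/capi@sha256:aaa111"
print(bump_image_digest(config, "cloudfoundry/capi", "bbb222"))
# image: cloudfoundry/capi@sha256:bbb222
```

Pinning by digest rather than by tag is what makes the "pull from master and get a stable install" guarantee possible: the same config always resolves to the exact same images.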
F
At the moment we are testing this on GKE only, but we plan to expand that in the future to one or two more K8s distributions, perhaps AKS or another hosted K8s. Up until now we were just using one test, which is cf push of a Docker app, but we plan to change that as we incorporate new capabilities. Especially with the new buildpack support, we may start using the smoke tests that we package with it as a way to validate new changes, and then move to CATs as new capabilities get added. And then, looking forward...
F
So this answers the question about where we are toward 1.0. Looking forward, what you're seeing here is the release integration roadmap. The component teams have their own roadmaps and timelines for CF capabilities, so as the component teams make them available, we will integrate and ship them throughout this journey. What you see here is not exhaustive, but it's very much focused on the integration roadmap, if you will. So, we just talked about the auth work coming.
F
The other big initiative we want to build is the contribution model, so we can consume component releases. This is where we want to consume releases from the contributing teams, then integrate, validate, and ship, very similar to what you'd expect from cf-deployment. And then, further out, we want to go more toward operator concerns. For example, what does an upgrade look like from n-1 to n? How do we package smoke tests so that operators can validate those upgrades?
F
What would the world look like for setting up ingress certificates? How do we rotate those, and so on and so forth? So we are not there yet, but our hope is that, as we march through the coming months, we will address some of these enterprise-readiness capabilities.
F
Yeah, so I'm going to move to the next slide. How do you get started? It's available at this repository; there are docs to deploy to a hosted K8s cluster and also to deploy to a local cluster. So please take a moment and try it out; we'd love to get your feedback. As I mentioned, you can reach out to the cf-for-k8s channel with bugs or feedback, you can submit issues and PRs to the cf-for-k8s repository, or you can even sign up to give direct feedback; I plan to send out a Google Form.
F
If you're interested in providing your feedback, especially if you're already using K8s, I would love to hear how you're thinking about running CF on it. So that's about it, and I can take questions. One question was about KubeCF; can you repeat that question again? Sure.
E
A
I totally want to take a swipe at it, because this is the partial subject of the blog post I wrote about the incubation of KubeCF, which will come out pretty soon, and it's something we've been talking a lot about in CF-for-K8s, or rather the Kubernetes SIG meetings, I should say.
A
When we met in Philadelphia, we had conversations, because, of course, SUSE and IBM and SAP have been working on a sort of containerized Cloud Foundry in a different stream for a while, about how to get to the optimal goal of a truly not only Kubernetes-native but Kubernetes-idiomatic place for Cloud Foundry, so that Cloud Foundry is seen as a necessary part of a Kubernetes cluster for making it into a platform. And so we had two different ways of going about this.
A
One was that the component teams upstream that are making the parts of Cloud Foundry have to understand Kubernetes deeply and have to make Kubernetes-native components from the start, and make them what we're starting to call Kubernetes-idiomatic: using the parts of Kubernetes that are really good at delivering cloud-native applications.
A
Parallel to this, especially for SUSE, because we've got a distribution running on Kubernetes that is based on BOSH releases, we have to have continuity with the extensive BOSH community. So our team, the KubeCF team, has been pursuing this way of consuming the BOSH releases that Release Integration already puts out and combining them into a certifiable Foundation release that you can actually run in production.
A
What you're seeing with CF-for-K8s, on the other side: you're going to see CF-for-K8s complete more of the functionality that you would see out of the box right now in KubeCF. So we're both going to the same place, which is a distribution of Cloud Foundry on Kubernetes, or maybe two distributions of Cloud Foundry on Kubernetes that might have some slight differences in how they are deployed. But the components will be the same, so you might always install KubeCF with Helm.
A
F
A
KubeCF also ships Diego as an optional thing, which I don't think CF-for-K8s ever will; we ship Eirini as the alternative. But we're pulling in the same Eirini code, and ultimately we'll pull in the same stagers, the same version of CAPI, and the Kubernetes-native version of UAA. So the architectures will converge. They may not be exactly the same, but they should both be certifiable, and they should both be, from an end user's perspective, an identical CF experience.
E
G
F
F
Yeah, so logging, and logging in general: we are working with the logging team right now. Last I checked, the logging was not working due to some Loggregator complexities, but we really hope, once the CF buildpack apps work goes in, to focus on the logs and see how we can resolve any pending issues.
F
F
D
D
D
F
E
F
It's to be determined yet. We have to talk to the teams and understand; as I mentioned, we're looking toward the contribution workflow, and during that we will try to figure out how the releases will be consumed, and whether they'll be consumed by cf-deployment or cf-for-k8s or both. So I don't have the answer to that question yet. Okay.
E
A
F
I just had the privilege to show it. It was really the teamwork of the community and all the contributing teams, who have been phenomenal in trying to get this working with one motive. So, like I said before, I'm pretty proud of this journey, with its singular focus on getting this working. You know.
A
Please take care of the people you love. It's going to be a rocky ride with COVID-19; do your best to self-isolate and stay safe. And while we're all working with each other online, please remember to be kind, because that's one of the best things we can do for one another right now. With that heavy thought, thanks all very much for joining. Thank you.