From YouTube: Cloud Foundry Community Advisory Board [Jan. 2020]
Description
Community discussions include:
1. KubeCF by Thulio Assis (f0rmiga), Troy Topnik, and SUSE team
2. CAB 2019 Survey Results and Future Direction by Michael Maximilien (Max) of IBM
The full agenda can be found here: https://docs.google.com/document/d/1SCOlAquyUmNM-AQnekCOXiwhLs6gveTxAcduvDcW_xI/edit
Recording
A: And yeah, I did. Okay, excellent. Thank you, Ashley. All right, so this is the first call for the year. There are a few things we need to go over, and then we, of course, have a really good demo. I'll do the CAB survey last, since we want to give the presenting team as much time as possible to do their presentation, and then, of course, the PMC highlights and CFF updates and so on. Right, so welcome.
A: This is the 2020 Cloud Foundry Advisory Board meeting, and we're excited to have you. If you're listening on YouTube, you can always join by just going to the Zoom meeting at the time provided, and then you can ask questions. And we have a Slack channel called CAB, so you can join in the discussion there. So we usually start with Foundation highlights, and I think Swarna is here, and I saw Chip also. So, Swarna, I guess, do you want to add those?
C: Yeah, so I just wanted to quickly update on the Cloud Foundry summits. Right before we broke up for the holidays, we had an announcement, a save-the-date kind of announcement, about the Cloud Foundry summits. This year we are co-locating with our anchor event for the Linux Foundation, which is the Open Source Summit. It used to be called LinuxCon in the past, so you may be familiar with one or the other name. It will be on the Thursday, so right after the Open Source Summit closes.
C: We will have the Thursday focused on Cloud Foundry. We are still working on the CFP, the number of tracks, the registration details, all of those kinds of specifics. We will be announcing those quickly so that people can start planning for it, but at least for the dates: the North American one is on June 25th in Austin, Texas, and the European one is on October 29th in Dublin, Ireland.
C: So at least you can save the dates; hopefully in the next week or so we will get the rest of the details announced as well. I also wanted to share one other thing for anyone here who has the bandwidth or the time and who is interested in becoming a mentor. This is more for students in remote locations in various geographies. They have submitted interest in Cloud Foundry as a project through the Linux Foundation's Community Bridge initiative.
C: So I am talking to pretty much all of the 35-ish people that submitted an interest, and most of them want to start contributing to Cloud Foundry in some fashion. Some of them have mentioned specific projects, like the CLI or Eirini or the Quarks kind of projects, as the areas where they would like to start contributing. So if you are interested, or if you know someone on your team that is interested and is able to mentor folks that come in from that program, let me know; just drop me an email.
C: I will start making the introductions. And as a last thing, we have a community calendar on our website; I added the link as well. It has the links and the meeting information for all of these meetings. Just a quick note on some of the upcoming meetings: we have the BOSH PMC meeting tomorrow at 11:00 a.m. US Pacific, and we have the CF-for-Kubernetes SIG meeting next week, next Tuesday.
C: We have two: the SIG meeting and the bi-weekly App Runtime PMC meeting. And we also have the Cloud Foundry Operators SIG meeting next Wednesday. So feel free to add the calendar to your own, and there are also channels for those SIG meetings in the Cloud Foundry Slack, so feel free to join the channel and ask questions or look for the agenda and such. Any questions?
C: We will probably reduce the number of tracks, because it just doesn't make sense to have all seven or eight tracks that we used to have in the past across two days. Those are the kinds of details that we're still trying to figure out. Today and tomorrow we are going to have those discussions and see how many rooms we get, how many tracks we can host, what the most meaningful tracks are, and such, and announce those, I think, as late as next week, hopefully. But yeah.
E: Cool. The other thing that I'll add to that, and Swarna can clarify if I say it incorrectly, is that we're also programming the two cloud tracks for the Open Source Summit. So you know, if you're planning to come to the Cloud Foundry part of the event, which will be focused on contributors and on end users as the two primary audiences, there's also going to be a lot of content that's very relevant during the rest of the summit.
F: Okay, yep, yep. Oh yeah, I'll comment briefly on some of the activity that's going on in the Runtime PMC. A lot of it actually has been focused on development around Kubernetes and getting the system to run there. Release Integration in particular has been focused on this problem, providing integration support to the component teams as they've been focusing their own efforts on getting Kubernetes-deployable artifacts for their components and adapting them to interact with Kubernetes.
F: I think Sai is planning on sending out a proposal for the first phase of that effort. Along the lines of the component teams, CAPI has really been focused on that, both building out their Kubernetes deployment artifacts and making some initial progress through their milestones, aiming next to run cloud-native buildpacks for staging tasks. And likewise...
F: They released an initial version of their component that synchronizes CF state to Kubernetes resources, and they're continuing to refine that. I know there's also work on the Loggregator team's story around both logs and metrics integration on top of that Kubernetes substrate. So there's a lot of parallel activity kicking off on those teams, and we need to be able to pull it together into something that works. Okay.
D: Yes, Nic here. You might have said this at the top, and I apologize. On packaging of things towards Kubernetes: obviously partly it's Docker, but is everyone going to be using buildpacks? For example, the Cloud Controller is written in Ruby, and we don't have a Ruby buildpack. So will everything be using buildpacks, or will some things not? Yeah.
B: I just wanted to jump in there, Dr. Nic. Obviously, Ruby is very much on our radar in terms of priorities. We know a lot of the components would need the Ruby CNB in order to make use of the new cloud-native buildpacks, so that is actively on our roadmap right now. We're wrapping up the .NET Core and PHP ones, and then Ruby is looking to come very close after that. So stay tuned, I guess. Thanks.
D: Mike or Eric, can you give a quick update on how you intend to solve the upcoming issue around blobstores? Because if we are in a Kube-native world, I think everybody agrees that the existence of a blobstore is probably not desirable; you'd like to have everything in an image registry. I know that there has been some work happening, and I know that there has been some work on the Cloud Controller side. Can you maybe quickly outline how that strategy is supposed to come together in this context? Yeah.
F: I mean, I think you're right. The Cloud Controller is about the only client of that blobstore, to my knowledge, and buildpacks, and then kpack with the consolidation of resources in the OCI image registry, solve a lot of that. There are still a few loose edges; maybe resource matching is the main one that comes to mind. So I don't know the exact plans for removing that dependence, but that's probably something to talk about with Zack and Scott specifically, and with the other engineering leaders on that side.
D: But do I get this right that, in consequence, that means that one of the upcoming CF-for-Kubernetes application-platform distributions, or whatever the name of that thing will be, right, the Kubernetes-based one, will then package something like kpack as an additional component, as part of it? Or how is that?
F: Well, I believe that any distribution of CF incorporating kpack would then include a distribution of it, yes. That's my understanding of the current plans. And to that extent, CAPI and the relevant teams have been discussing how to get a more coherent artifact in the community that incorporates that work.
F: I think they're planning jointly to have kpack incorporated there, and there may just be some discussion about which side it falls on: would it be bundled directly into the Cloud Controller deployment resources, or would it be part of some integrated set of modules that stands on its own? Okay.
A: Okay, cool. So I pinged Morgan; he's been a bit sick, but he did send me the update for BOSH, and it's pretty straightforward. It's busy, I guess; a slow week, sorry, which is expected, since this is the second week after the holidays. But he mentioned that they've been working on exposing director metrics, things like, for instance, the number of unresponsive VMs...
A: ...resurrection status, number of queued tasks, max number of queued tasks, etc. So I'd recommend you join the BOSH PMC meeting, which is this Thursday at 11:00 a.m. Pacific, if you have more questions, or if you have questions for Morgan, I guess. Okay. For Extensions, it's definitely been in a bit of a lull, and we have not had our first meeting, which will happen at the end of the month. So hopefully we'll get you some dates next month.
A: Yeah, right. I think the important thing also to mention for people, if you have not gotten a status on Extensions in a few months, is that we have three projects that moved to the Attic. Those projects moved there mainly for lack of interest, or also, I guess, you know, experiments that maybe didn't pan out as much as we expected.
H: So this is a presentation about KubeCF and our team's intention to bring it upstream. It's a little bit of internal evangelism, so everyone knows what we're up to and what our intentions are, so that we don't duplicate effort. I was listening very attentively to what Eric was talking about, with the upstream teams Kubernetes-ifying things and making the components not only Kubernetes-native but Kubernetes-idiomatic.
H: It's a GitHub repo and a number of public build pipelines, which you might not necessarily know about, but you will see the results of them in the git repo. It's currently under the SUSE org, at suse/kubecf, and it grew out of the v3 branch of SUSE SCF, which some of you may recall. This is the basis of our Cloud Application Platform distribution.

H: Our intention is to get this upstream; we'll talk a little bit about how we do that later. Its function as a central point for bringing together components for a Kubernetes distribution is like cf-deployment's, so it brings things together; but because it's sort of downstream from cf-deployment, it actually relies on cf-deployment at the moment. Thulio and Jeff or Vlad, please feel free to join in and correct me if I'm wrong.
H: In fact, we sort of consider this right now as part of the Quarks project, because KubeCF relies on the cf-operator in order to run properly on Kubernetes. In fact, it won't work without it. If you push the KubeCF Helm chart directly without a cf-operator in place, it won't really do anything, and that's because the cf-operator is in charge of lifecycle management for KubeCF. It watches the namespace and, via a CRD, translates BOSH manifests into Kubernetes manifests in real time.
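To make that flow concrete, the resource the operator watches looks roughly like the sketch below. This is a minimal illustration from memory of the Quarks cf-operator conventions; treat the API group, version, and field names as assumptions rather than an authoritative reference.

```yaml
# Hypothetical sketch: a BOSHDeployment resource telling the operator
# where the BOSH manifest lives and which ops files to apply to it.
apiVersion: quarks.cloudfoundry.org/v1alpha1
kind: BOSHDeployment
metadata:
  name: kubecf
  namespace: kubecf
spec:
  manifest:
    name: cf-manifest        # ConfigMap holding the BOSH deployment manifest
    type: configmap
  ops:
    - name: kubernetes-ops   # ConfigMap holding ops-file patches
      type: configmap
```

The operator's controllers react to changes in this resource and re-render the corresponding Kubernetes objects, which is what makes the translation happen in real time.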
H: So, as I mentioned, this serves the same function as cf-deployment, but it actually tracks cf-deployment as an upstream source of truth, and this is done by a manual process. So we bump to specific cf-deployment releases, and that involves doing some patches. And, Thulio, are you online? Did you want to elaborate on this at all? Yeah.
H: Okay. Some very important questions we need to decide on as a community (we of course have some preferences, and Chip has expressed an opinion on this) are: where does this go? Where does it ultimately sit upstream? Is it a Cloud Foundry incubator project, or should it be part of the Quarks umbrella? Chip's suggestion was that it be separate.
H: It should be a separate project under the Runtime PMC. We've had a number of discussions with Release Integration, with Sai, about how this will work with cf-deployment. And, of course, something we have a little more control over is when that happens. We were very keen to get it in as soon as possible to raise the profile of KubeCF, so that everyone who's working on porting CF components to Kubernetes knows about it and knows how it works.
H: We want to get another release out to get things stabilized, so that the developers aren't pulled off answering questions where we know it's just, you know, you have to do this little thing to get everything working. So I communicated to a couple of people at the CFF that we're not going to do it right away; we're probably going to hold off for at least one more release of KubeCF before we donate it. We want to get some docs better in order; so far, only the brave and intrepid have managed to find their way through. I know Dr. Nic has.
G: Cool. You know the check that we all do every time... I'm going to just start by showing you guys a typical values YAML, one that we use to deploy KubeCF. This is not the actual one that Troy used to deploy on the cluster that I'm going to show, but it's very similar. The central requirement is the system domain: that's what you have to define, right, because KubeCF cannot just guess the system domain. And this is going to go away in 0.2.
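A values file of the sort being described might look like the following minimal sketch. `system_domain` is the one key the speaker calls out as required; the `features.eirini` toggle is an illustrative assumption based on the demo running Eirini instead of Diego, not a guaranteed key name.

```yaml
# Hypothetical minimal KubeCF values sketch.
system_domain: kubecf.example.com   # the chart cannot guess this

features:
  eirini:
    enabled: true                   # illustrative: run Eirini rather than Diego
```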
G: I'm using 0.0 here because at this point it's a little easier. The reason that setting is there in the first place was a mistake on my part: I had to actually create a security group to talk to GitHub in 0.1. We are removing this in the next release, so at this point I'm using 0.0 just to make it easy to run on that cluster.
G: So these are all the pods. You can see we have the kubecf namespace, with all the components that we are used to seeing with cf-deployment, with minor changes: for example, we don't see Diego itself; instead we see Eirini, because we activated it, and we see a few jobs split out of their original instance groups, like the routing API and the scheduler.
G: What we do have that's different is this other namespace that holds the apps for Eirini. And I'm going to show you the commands now that I used to deploy. This is a cluster similar to the one Troy deployed, which is the one that I'm showing you guys. First, we create the namespace for KubeCF, and then we install the cf-operator.
G: We use Helm for it. It should work without passing any extra values, but you can; in some cases it may be useful. And then we install KubeCF. Here I'm using 0.1, not the tip of master, and I'm using the values that I just showed you guys. And then we can interact with the cluster. So here I just pushed the twelve-factor sample app.
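The sequence just described can be sketched as the following commands. This is a hedged reconstruction of the demo flow, not an authoritative recipe: the chart artifact names, the pinned version, and how the admin password is obtained are assumptions for illustration.

```shell
# Sketch of the KubeCF 0.x demo deploy flow against an existing cluster.

# 1. Create the target namespace.
kubectl create namespace kubecf

# 2. Install the cf-operator chart; defaults usually suffice.
helm install cf-operator ./cf-operator.tgz --namespace kubecf

# 3. Install KubeCF itself (a pinned 0.1.x release, not master),
#    passing the values file with the system domain.
helm install kubecf ./kubecf-0.1.0.tgz --namespace kubecf --values values.yaml

# 4. Once the pods are running, it behaves like any CF endpoint.
cf api --skip-ssl-validation "https://api.${SYSTEM_DOMAIN}"
cf auth admin "${CF_ADMIN_PASSWORD}"   # retrieved from a cluster secret
cf push my-twelve-factor-app
```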
G: It's the same experience that everybody is used to with cf push. Now I'm going to show you a few pieces of code. So we have a bunch of ops files here, and, like I mentioned before, they are used to add functionality to cf-deployment that is specific to Kubernetes: so, for example, probes, health checks, ports that we need to open up for Kubernetes. And we do this by adding those properties to the jobs.
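To make the ops-file mechanism concrete, here is a tiny Python sketch of how a BOSH-style `replace` or `remove` operation is applied to a manifest. This is a toy interpreter for illustration only (it is not the real `bosh interpolate`, and `apply_op` is a hypothetical helper name); it supports just dict keys and `name=...` list selectors.

```python
# Toy interpreter for a subset of BOSH ops-file operations:
# paths are /-separated; a segment like "name=api" selects the list
# element whose "name" field equals "api".

def _walk(node, segments):
    """Follow path segments down to the parent of the target."""
    for seg in segments:
        if isinstance(node, list) and "=" in seg:
            key, _, val = seg.partition("=")
            node = next(item for item in node if item.get(key) == val)
        elif isinstance(node, list):
            node = node[int(seg)]
        else:
            node = node[seg]
    return node

def apply_op(manifest, op):
    """Apply a single {'type', 'path', 'value'} operation in place."""
    segments = op["path"].strip("/").split("/")
    parent = _walk(manifest, segments[:-1])
    last = segments[-1]
    if isinstance(parent, list) and "=" in last:
        key, _, val = last.partition("=")
        idx = next(i for i, item in enumerate(parent) if item.get(key) == val)
        if op["type"] == "replace":
            parent[idx] = op["value"]
        else:  # "remove"
            del parent[idx]
    elif op["type"] == "replace":
        parent[last] = op["value"]
    else:  # "remove"
        del parent[last]
    return manifest

# Example: add a Kubernetes-specific property to the routing-api job.
manifest = {"instance_groups": [
    {"name": "api", "jobs": [{"name": "routing-api", "properties": {}}]}]}
apply_op(manifest, {
    "type": "replace",
    "path": "/instance_groups/name=api/jobs/name=routing-api/properties/port",
    "value": 3000})
```

The real ops-file grammar also supports optional `?` segments, numeric array indices, and `-` appends, which this sketch omits.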
G: Another thing that we have to do, like I mentioned, is patches. We have the pre-render scripts and a bunch of patches to tweak functionality in some jobs. In this case I'm patching the routing API and changing, for example, the BPM YAML file, and then we are able to make it run on Kubernetes.
H: We designed it, of course, before any of the efforts of the upstream teams started to turn their components into Kubernetes-native components, so BOSH was the lingua franca for what we've built here. That said, now that this Kubernetes movement has started happening upstream, we're going to also consume other types of artifacts and combine them into the release within KubeCF. So the first of those artifacts will be Helm charts.
H: The Eirini team expressed an opinion that, rather than a BOSH release, they would rather deliver just a Helm chart, with some reservations about Helm; everyone who's working with Helm has a few reservations about it, at least. And so KubeCF now has the ability to consume things other than BOSH releases, and that is in the works. We're working on, for example, subbing out the database component: instead of using the cf-mysql BOSH release, trying out some Helm charts for some highly available...
H: ...MySQL databases, like MariaDB. We could add support for consuming Kustomize and plain YAML directly, untemplated Kubernetes configuration and container images. And we've seen demos from Dmitriy on ytt-templated release artifacts; we could also consume those and make those part of a KubeCF release. But we would need help from the teams that are making those components and making those choices of how they want to package their components, and they would in turn need help from the KubeCF team to get those integrated.
H: But it's our intention to make this extensible, and it's our intention to eventually replace all of the BOSH releases with Kubernetes-native components. It's very much built into the design of KubeCF that it can evolve over time: rather than stopping work altogether on all of our stuff and rebuilding everything on Kubernetes, KubeCF was designed to give us a path to move gradually.
H: We would ask anyone who's working on Kube-native components (and we've seen the proposal on Kube-idiomatic guidelines), anyone who's working on that stuff: we would love to have some interaction between them and the KubeCF team, and to have them consult us about how to integrate what they're doing into KubeCF. That's actually the primary goal of this whole presentation: to internally evangelize this point of contact between the KubeCF team, the Quarks team, and the component teams, and what we can do to help.
H: Some of you know that a number of the SUSE team have been working on trying to get Cloud Foundry to run on Kubernetes for about five years, since before the 1.0 release of Kubernetes. We learned a lot of painful lessons over that time, and we would love to help people avoid those and learn from the experience. A lot of people first dealing with Kubernetes will assume that it will all make sense and everything will work beautifully, because that's what the hype is about.
H: I could definitely do that; I am happy to evangelize all of this. I've also promised Caitlyn a blog post about this. This all started, this presentation started, with my post to cf-dev exhorting people to come in and become familiar with KubeCF and what we're trying to do, so that we can get a coordinated effort towards everything running beautifully on Kubernetes. So, I mean...
H: So again, that's a good point. We had to put those in, and the initial intention was: we can make these useful for everyone; we could even try and submit them upstream to Kubernetes. But actually, even within the team, we're not completely unanimous that that's a good direction to go in. Maybe we should just focus on providing in the operator only what is absolutely necessary to run CF on Kubernetes.
H: So yes, it's there, and it's good stuff, and a lot of those things should be Kubernetes features; we'd love to get those out either as independent operators or submitted upstream. But at the same time, we realize the cf-operator is very complicated, because it's providing extended functionality to Kubernetes that, ideally, we wouldn't have to use. So that actually segues into things like my first seeded question here, which is: why is it so complicated? Because there's a lot of stuff that BOSH does that Kubernetes doesn't.
And
oh
yeah
I
wanted
to
call
it
the
kubernetes
idiomatic
component
guidelines
and
I
forgot,
who
authored
that
they
are
any
team.
Somebody
remind
me
it's
a
pivot
from
the
other,
any
team
and
it's
being
shopped
around
and
and
we're
contributing
feedback
to
that
people
are
chiming
in
with
how
they
feel
about
those
guidelines.
H: They are not very opinionated yet, and I think we may get a lot of teams departing in different directions and doing things in radically different ways, which is a good learning experience. But I'd argue that we don't have a lot of time to learn this stuff, and that stricter guidelines, or tooling to enforce a certain amount of conformity, would probably be helpful.
G: And what really happens is that an errand becomes a Quarks job: the cf-operator detects the errand and creates a Quarks job for it, but the Quarks job is not triggered automatically, and this is why we have to run this patch on the job. We are basically setting the trigger strategy to "now", and what it does is: the controller for the Quarks job creates a Job that actually runs the tests in the cluster, and that's how we kick it off.
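The patch being described amounts to setting a trigger field on the Quarks job resource, roughly like the fragment below. This is a hedged sketch from memory of the QuarksJob CRD; the exact field names and strategy values are assumptions.

```yaml
# Hypothetical sketch: flipping a QuarksJob from waiting to running now,
# so the controller creates the underlying Kubernetes Job immediately.
spec:
  trigger:
    strategy: now   # instead of waiting for a manual trigger
```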
H: Yeah, and that's where a lot of the discussion came up about how we're going to keep our efforts in line with the efforts of the component teams. I just wanted to add something; I missed a slide earlier. Just a bit of a timeline: 1.0.0 of the cf-operator was December 20th; we've had another release of KubeCF since; we're going to have a 0.2.0 very soon; and there's been lots of great feedback from the community, notably from Dr. Nic.
H: But what good timing for that would be, whether it's good to announce it at SUSECON or some other event... but yeah, we just want to get some more documentation in place, so that people don't get confused when they come to the repo, so they know what to do and how to run the acceptance tests and all those things.
A: So in the past three years I've done this survey and I've been running these meetings, and I'm happy to do it, but obviously at some point maybe it makes sense to change, and that's one of the big differences with this survey versus previous surveys. But let's go through the results, and then we can hopefully have a couple of minutes to discuss; I have seven minutes on my clock. So I sent this out; hopefully you guys took it.
A: We had a good set of responses, and the goal is to do kind of an informal retro and collect feedback. The responses are anonymous, but some people actually volunteered their names, like Swarna. But you know, the point is to try to find out information right from the community, see what works and what doesn't work, and see if we can adjust. We do it once a year, so it's not too much to ask at all.
A: So the first question was: have you attended the CAB call in 2019? Just to get a feel for how many people are attending and how often they attend. What did you enjoy the most? And (this is a new question) does it need new leadership, the CAB, you know, the community calls? And then constructive feedback. I kept it short, because I want people to actually take the survey; as you know, when you've taken surveys with a lot of questions, you stop halfway. So these are the results.
A: We had only a 41 percent completion rate, but this is the highest we've ever had, so this is pretty good: 34 responses, and it took about 20 seconds on average for people to respond. So if you didn't take it thinking it would take you a lot of time, just know it's very quick, for next time. So these are the responses for the first question.
A: You can see that we only have a few regular participants, and the majority are attending every now and then. I guess it makes sense, because people look at their schedule and look at what's going to be presented, and if it's not of interest, then they don't attend. Like today: there was clearly good attendance, because a lot of people want to hear about the Kube stuff, yep. So that was good.
A: So these are the results. For every question I allow people to put in some text, and you can see that some people say they read the call notes; that's the one comment that came in. So, question two: what did you enjoy the most? What I did is I looked at the responses, tabulated the words, did a count, and then made a tag cloud, which I'll show you.
A: So people definitely love demos, and you can see, right, like the presentation today with the demo that Thulio did with Troy: it really helps, you know, kind of make it interesting for folks. So clearly, keeping to presentations that include demos is key. Obviously, people want to know about the projects and about the community and what's going on, and the presentations. I only listed the top five of the words out of the responses, and people loved the updates.
A: So these are some highlights from the responses. Somebody says "always cool"; I don't know if it's Swarna, but I'm guessing it's her; she's been very positive. I love that, but you know, we want to also hear the negative part, or at least the things that we can improve. But it seems like these are kind of the highlights.
A: I just looked at the responses and picked the highlights; towards the end of the slides I pasted all the responses, all the raw data, so you can take a look at it. All right, so question 4, and this is important to me because I'm the one that's asking it; we'll discuss it towards the end, but let's look at the results. So about 20% said yes, and then the rest said no, because I made this a...

A: ...you know, yes/no question. And these are some of the highlights, because this question only had yes or no and then a way to provide feedback. Somebody said "new blood"; I love that. It's not me; I didn't take the survey. And there are a couple of people saying really good things about me; I appreciate that. But I think, you know, this call works because of you attending and people presenting. So how do we move forward?
A: Still attend also, right? Okay, so let's look at the constructive feedback. Thank you, Wayne. So this is: what constructive feedback do you have? And it seems that people said "no", "none"; so basically they love what's going on. And of course "presentation" was an item that came up, and I'll show you some of the text: "more", I think, "presentation" and "teams" and "good". So these were the words that featured the most out of the responses, and I kind of summarized it as: none; more team presentations; good, right?
A: So that's kind of how it worked out, the data, and this is kind of the tag cloud. You can see that people are not asking for any changes; they want more presentations, and probably team updates as well. So that was the tag cloud. These are the highlights from the responses: "keep up the good work"; "none"; "perhaps support from Abby or Chip could give the CAB reinvigorated interest; maybe they could guest present". So here is one thing, you know, feedback for Chip: I'll ping them and see...
A: ...if it makes sense for the next one; they could come in and present. But Chip is always there, and Abby does attend. So then, you know, I'm not so sure how to take this one, but that was one piece of the feedback. This one I thought was also interesting: "maybe ask individual component teams to also present their roadmaps in more detail." I think that's actually excellent feedback, in the sense that, you know, getting a, you know, PM to...
A: ...present their roadmap, instead of just saying "hey, here's the roadmap, we'll read it", or attending a bunch of calls to get the roadmap; this call might be a good place to actually present a roadmap. So I thought that was really nice. And then one person said there was a negative "what is CF good for" tone in two CABs that they attended. I definitely would not want this to be the case, and if that was the case, we'd definitely try to figure out how to fix that.
B: I could definitely see that tone perhaps bleeding through, as everybody questions the future of where we're going with things, especially in light of Kubernetes adoption and all this other stuff. So, you know, trying to figure out the differentiation... I suspect that tone might come from those kinds of discussions, as one ponders, yeah.
A: Yeah, that's good feedback; that's a good point. I think, you know, 2019 was probably the pivotal (well, you know, no pun intended) year for Cloud Foundry, in the sense of figuring out that Kubernetes strategy, and you can see, with KubeCF and Eirini and all the other projects, we're sort of getting there, right?
A: So hopefully that tone will sort of disappear, and Cloud Foundry will be what it is in the middle of the rest of cloud native and within Kubernetes. So those are the results. These are the complete answers, if you want to read them; I pasted the link in the CAB channel and also in the agenda, and then I'll paste it again in other places if you want. But I think... let me open it up for questions.
A: We are out of time, but I think one of the things that I want to try to do is to see if people would want to volunteer to either replace me or co-lead it with me until the point where they can take it over. I mean, I've been doing this for three years; this would be the fourth year. And I'm not tired of it; I'm happy to meet with Wayne and Dr. Nic and everybody else, right?

A: But at some point it makes sense to give other people an opportunity, and there are probably, like, new ideas, right, that could come in. And I'm involved in other things now, mostly, you know, Knative and so on. So, you know, I don't have a lot of information, I don't have a lot to add; all I'm doing is pretty much, you know, organizing and providing some input here and there, I think.
C: [inaudible]
A: Cool, yeah, I think that makes a lot of sense. So if you want to volunteer somebody, or better, if you want to volunteer yourself, then ping me or ping Swarna, and then we'll see if we can tag you in for the next one, or the one after that. So with that, we are six minutes over time; well, officially two minutes, since we started three minutes after eight. Thank you for joining, thank you, Troy and Thulio, for presenting, and everybody else, and we'll see you in February. Take care, everybody. Bye.