From YouTube: Cloud Foundry Community Advisory Board Call [Sept. 2018]
Description
Get the agenda here: https://bit.ly/2QG0J9b
A: Okay, welcome everybody to, I guess, the September CAB call. It's a little bit less than a month before the summit in Basel, so I'm not sure how many people will join this one, but it looks like we're gathering a group, and there are two really good talks lined up. So if you are here to listen to those, they're going to start soon.
D: Compared to where we were last year at this time, I hear from the events team that we have been trending higher ever since the registration started, so Basel is looking good. The schedule and everything else is final, so I added links for the trainings and the rest of the day-zero activities, and if you are interested in the hands-on labs, just go to the schedule and filter by "hands-on lab"; you will see the long list of hands-on labs that will be offered at the summit.
D: The other thing that I also wanted to highlight: I don't know how many of you were in Boston and did or did not take advantage of the headshot sessions. We are offering a complimentary headshot session there again, so if you want to get a professional picture taken, please take advantage of that. And thank you for all the nominations we received for the Community Awards. I was really impressed with the number of nominations we got; it was definitely a lot higher than what we saw for Boston.
D: I can't wait to announce those on the Thursday of the summit. We have also announced Cloud Foundry Summit North America for 2019: mark your calendars for April 2nd through 4th. It's going to be in Philadelphia, Chip's hometown, and we will open the registration and the call for papers next month. As for Cloud Foundry Days, we are doing these again as a day-zero activity alongside KubeCon, both in Shanghai and Seattle. The call for papers has closed for Shanghai, but it closes soon, on September 28th, for Seattle.
D: We also launched an updated community page at the link, and we have also started a webinar series. We had a couple in the past, and we have one upcoming on October 4th on Project Eirini, so if you're interested in those, or if you would like to share those links with your peers, feel free to do so.
A: Is everybody here coming to Basel? Cool. I mentioned this last time, but for people that have never been to Switzerland: there is a place called Lauterbrunnen that we're planning to go to. It is just mind-blowing. I was there last year and I'm going again, so plan your time and your visits. Cool, okay.
F: Sure, let's see, a few things that come to mind. cf-deployment cut a major version last month, and they have released some plans around what's going to change for version 5 at the end of this month. Again, these are usually small incremental improvements; for the breaking changes, they are trying to adhere to rigorous semantic versioning.
F: A couple of other things: the Cloud Controller has been changing the buildpack model to support associating buildpacks with specific stacks, and all of the CF buildpacks have been updated with support for the cflinuxfs3 stack. So we're continuing to move forward on that transition before cflinuxfs2 goes out of support with Trusty Tahr next April. And the Loggregator team is also investigating what it's going to mean to have a logging isolation segment.
G: Actually, I have one question, and bear with me if it's a stupid one, particularly around the v3 API, because I probably should know this. I recall, in the very early days when we started talking about v3, that there was a way to fine-granularly control searches, in the sense that I could make a query and say "hey, only in this org ID", whereas in the v2 API I have to do all of that filtering on the client side.
E: I recall the discussions about how we might service that sort of request. There was space for that sort of thing in the API design, but I don't know if they have implemented all of those endpoints yet. I think the desire was to only implement it in use cases that make sense, as opposed to implementing it willy-nilly all over the place, so that we could optimize the performance of those sorts of queries.
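To make the contrast concrete, here is a sketch of the server-side filtering the v3 API was designed for, assuming the v3 list filters documented by CAPI (for example `organization_guids` and `names`); the API endpoint and GUID below are made up, and the snippet only constructs the request URL rather than calling a real API:

```python
# Sketch only: builds a v3 list-apps URL with server-side filters.
from urllib.parse import urlencode

def v3_apps_query(api_url, org_guid, name=None):
    # In v2 the client had to fetch broadly and filter locally; with v3
    # the query itself can scope results to a single org GUID.
    params = {"organization_guids": org_guid}
    if name:
        params["names"] = name
    return f"{api_url}/v3/apps?{urlencode(params)}"

url = v3_apps_query("https://api.example.com", "org-guid-123", name="my-app")
```

The same pattern applies to other v3 list endpoints, such as spaces and routes.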
C: I've used the new cflinuxfs3-based image for a stack, but the apt buildpack doesn't work on it, and I was told it's not a supported buildpack, or it's not loved, or, I can't remember the word, but it's blocking me. I can't move to cflinuxfs3 without that working, so I'd love it to be supported; it's really helpful, and otherwise you have to move to Docker images for certain scenarios. The other thing was about the CF CLI: as the CLI moves forward, there are certain downstream users of it that aren't maintaining compatibility. I tried to move forward with the new manifest with buildpacks, but the Concourse cf resource is not packaging a new version of the CLI, and therefore it complains that it doesn't know what this is about. So I had to go back to using the old schema. I did talk to the new PM.
C: So I suggested that perhaps the CLI team should own the CI experiences, and they said no. That just means that, whether it's Concourse or anyone else, if we don't maintain backward compatibility, we're going to have users on one version of the CLI and CI systems on a much older or different version, or someone has to maintain forks or whatever. And if that didn't make sense, I apologize; I can explain it some other way to whoever wants to hear it, I guess.
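For context, the manifest change at issue here is presumably the move from the legacy singular `buildpack` attribute to the newer `buildpacks` list accepted by more recent CLIs; a minimal sketch, with placeholder app and buildpack names:

```yaml
applications:
- name: my-app
  # newer plural form; tooling pinned to an older CLI, such as the
  # Concourse cf resource of that time, only understood `buildpack:`
  buildpacks:
  - ruby_buildpack
  - binary_buildpack
```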
E: I think that's still an open question: the scope of that team, and whether or not they're maintaining it. They don't know who created the apt buildpack, and therefore there's the question of maintaining it. But I do agree there's some usage of it, so I'm not quite sure; let's at least follow up on those two items.
H: I'm here. Hey, how are you? I'm good, I'm good. All right, so a few updates from Marco this time. First, we are right now thinking about the potential incubation of the Huawei Cloud CPI. The discussion is still under way, so we expect to wrap this up soon. So if there's anybody from Huawei, or connected to Huawei, here on the call: we haven't forgotten about it, we are simply still having those discussions.
H: Also, we are currently working on, I would say, official statements of our support policy for stemcells, the Director, and bosh-deployment itself. We just want to make clear what the support expectation is for those three distinct artifacts. So if you have any input about that: for example, we know that some people rely a lot on bosh-deployment and see in it a kind of official way to deploy BOSH, while some other people have a different opinion.
H: So we just want to listen to you guys and make sure that whatever route we take going forward will be explicitly defined. We'll probably put it somewhere on bosh.io, where you download the artifacts, or somewhere else. So if you have any input, please reach out to Morgan or myself, or to team members like Danny, and let us know what you think; we are looking forward to hearing from you. Apart from that, the team has been busy delivering a set of small features in the last few weeks.
H: Bug fixes and things like that. The big chunks of work have been improvements to BOSH DNS in San Francisco, and in Toronto we'll be delivering better information in the drain scripts, to ensure that you can make better decisions about what to do in those scripts.
I: Thank you, Eric, for the news. I've always wondered: what is the difference between the defaults of the BOSH release and the defaults that are applied in the bosh-deployment manifests? I actually asked that question to Dmitriy, but never got an answer. So what would you say about that?
I: It was true in one of them and false in the other; for example, bosh-dns is enabled by default in bosh-deployment. So I submitted a pull request to the documentation saying, okay, let's document that, because it's enabled by default in bosh-deployment, and Dmitriy said no, I'm refusing this pull request, because it's not the default in the release. Okay.
H: In the case of DNS, I would guess that we set the default in bosh-deployment to yes to ensure that people will try it and adopt it, and the default in the release still stays no, maybe for historical reasons. But this is something that we would probably be willing to bring in line going forward. I mean, that's part of our discussion about the role of bosh-deployment, and here we don't have a definite answer yet; we are still discussing it among ourselves, but yeah.
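As an illustration of the mismatch being discussed: bosh-deployment switches local DNS on in its manifest, while the release default stays off. A hypothetical go-patch ops file that would give a plain release-based deployment the bosh-deployment behavior might look like this (the exact property path is an assumption, not verified against the release spec):

```yaml
# Illustrative only: enable the Director's local DNS, mirroring the
# bosh-deployment default; the property path is an assumption.
- type: replace
  path: /instance_groups/name=bosh/properties/director/local_dns?/enabled
  value: true
```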
I: Great. And we are expecting improvements in the drain scripts, because I have documented the hack where you check that the persistent disk size is becoming zero to know that the node is going away forever. We would like something better, for example for in-memory databases that don't have any persistent disk at all.
H: And in that case we are currently planning to return information about the current and future state of the deployment, the instance group, and the instance, and then you can make very granular decisions about what to do. So you should be happy; we are busy there. There was a BOSH notes document on the topic, but we are currently working on updating it. Once this is done, we will move the content to a public Google Doc to gather feedback from the community.
I: Okay, great, thanks.
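The drain-script hack described above can be sketched as follows; this is written in Python for illustration (real drain scripts are typically shell), and it assumes the `BOSH_JOB_NEXT_STATE` environment variable points at a JSON spec file whose `persistent_disk` field is 0 when the instance is going away for good:

```python
#!/usr/bin/env python3
# Sketch of a drain script using the persistent-disk hack described above.
import json
import os

def going_away_forever(next_state_path):
    # The hack: if the future spec has no persistent disk, the node is
    # being deleted rather than updated, so hand off any data now.
    with open(next_state_path) as f:
        spec = json.load(f)
    return spec.get("persistent_disk", 0) == 0

if __name__ == "__main__":
    path = os.environ.get("BOSH_JOB_NEXT_STATE")
    if path and going_away_forever(path):
        pass  # e.g. stream in-memory data to a peer before shutdown
    print(0)  # a drain script reports how many seconds BOSH should wait
```

The improvement the BOSH team describes would replace this indirect check with explicit current and future state information.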
A: All right, so for Extensions I'll mention just one thing: the Abacus team, which builds the metering engine for Cloud Foundry, submitted a proposal. I would ask you to take a look at it, because this is essentially the next version of the package they're working on. If you're interested in that project, come to the CF-Extensions PMC calls every month.
J: ...which is the design of Service Fabrik. So these are the topics that I will be talking about: why the new design was introduced, what the motivation was, what it offers, briefly the high-level architecture and the capabilities, and maybe, if time permits, a very short demo. So let's quickly get into it. The main motivation behind the Service Fabrik redesign was the following: we wanted to integrate new provisioners. As you probably know, Service Fabrik at this point supports BOSH-based provisioning and Docker-based provisioning.
J: We wanted to integrate new provisioners and find out how easily somebody could do that. Along similar lines, we also wanted to be able to integrate new backup mechanisms into Service Fabrik, alongside the one that we already have, something like BBR or SHIELD. And we wanted an event-driven architecture with which you can easily do a lot of things like monitoring and so on.
J: Basically, we re-architected Service Fabrik into a control-loop-based, event-driven architecture. We use the Kubernetes API server for the eventing. How it works is that the broker sends events to the API server: the broker is basically an API controller layer which only records the events in the API server, and you have different operators which watch and listen for these events and process them. So, moving on to the high-level architecture.
J: At the top we have the different consuming platforms, be it Cloud Foundry, Kubernetes, and maybe more. In the service broker framework we offer two types of APIs. One is the broker APIs, which conform to the OSB API and offer the CRUD operations on service instances and bindings, provision, bind, and things like that. We also have some extension APIs, for doing backup and restore and things like that, which could eventually become an actions API if the OSB API spec comes up with actions. So, how it works.
J: The way the new design works is that whenever a create or an update request comes in, the API controller sends an event: through the custom resource definitions, it creates a resource in the API server. Of course, the data is backed by etcd behind the API server. Then, on the operator side, we currently have a BOSH operator and a Docker operator, and somebody can also write other operators.
J
Other
operators,
like
you,
can
write
an
operator
for
200
his-
service
provisioning,
so
the
these
operators
basically
are
watching
and
listening
to
the
APS
of
resources
and
any
changes
to
them
and
then
based
on
that
it
reacts,
and
he
does
the
service
publishing
lis
are
things
like
provisioning
or
things
like
that.
So,
in
a
nutshell,
the
capabilities
that
it
offers
so
integrating
new
provisional
becomes
easy.
Now
that
flow
is
decoupled
from
the
service
traffic
stream
code
and
also
many
new
capability.
A
new
mechanism
like
backup
and
restore
also
becomes
easy.
J: We also maintain a minimal state, which reduces some of the platform dependency for some of the metadata that we need, such as the plan ID or service ID for a particular service instance. Also, the fourth point is very important: you can actually write new operators in any language, so it also enables polyglot programming, and somebody can contribute to Service Fabrik by bringing in new provisioners.
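The control-loop design described above can be sketched with a tiny in-memory model: the broker only records desired state as a resource, and an operator watching for changes does the actual work. The class and state names here (`SFServiceInstance`, "create", "succeeded") are illustrative, not the real Service Fabrik CRD schema:

```python
from dataclasses import dataclass

@dataclass
class SFServiceInstance:      # stand-in for a custom resource
    name: str
    plan_id: str
    state: str = "create"     # desired action recorded by the broker

class APIServer:              # stand-in for the Kubernetes API server
    def __init__(self):
        self.resources = {}
        self.watchers = []
    def apply(self, res):     # broker side: just persist and emit the event
        self.resources[res.name] = res
        for watch in self.watchers:
            watch(res)
    def watch(self, fn):
        self.watchers.append(fn)

class BoshOperator:           # operator side: reconcile on every change
    def __init__(self, api):
        self.provisioned = []
        api.watch(self.reconcile)
    def reconcile(self, res):
        if res.state == "create":
            self.provisioned.append(res.name)  # e.g. trigger a BOSH deploy
            res.state = "succeeded"

api = APIServer()
operator = BoshOperator(api)
api.apply(SFServiceInstance("demo-db", plan_id="bosh-small"))
```

A Docker operator, or one for a brand-new provisioner, would simply register another watcher against the same resources.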
K: Okay, hi everyone. This is Kate from the Service Fabrik team, and I will just quickly show you a demo. Let me take a moment to explain what these panes represent. Here on the left-hand side there is the API server, where I will show how the resources get populated and how their state changes; over here I have a platform, which is CF for now; and then we have a back end. In the case of BOSH, I will show how the BOSH deployments are getting spun up.
K: So let's start. I will show you the marketplace here. As you can see, we already have a blueprint service, which is also open source. It is a dummy service that Service Fabrik provides, and it has various plans, mainly Docker-based and BOSH-based ones. So we can create a service instance of this blueprint service.
K: ...and the deployment is getting created. Since we do not have much time, I'll just skip this part; it will get created. As you can see, the create has succeeded. I just wanted to show that this is the status quo for Service Fabrik: the BOSH provisioning it already supports now runs on the SF 2.0 architecture.
K: Another thing I wanted to show: we talked about being able to bring in new provisioners very easily, so I will show you that any IaaS-native service can also be brought in very easily. For example, I am taking Alibaba Cloud's native service here, ApsaraDB, their RDS database service; they provide PostgreSQL.
K: Okay, meanwhile, what I'll show is this: this is the Alibaba Cloud console, and we can see there is an instance getting created here which I just spawned, and it has the instance ID starting with the same characters as the resource we just saw. This takes a little bit more time, five to seven minutes.
K: We'll just connect to this, the Postgres instance, via this service instance. Just one more thing I will show on this instance: basically, what we do is create an Internet address via which we can access the instance, we create some accounts, and we create the initial databases. This is all part of the create-service-instance flow, which is done by Service Fabrik and the provisioner integrated with it.
K: Okay, so just to show you: you can see that this is the user that has been created here. So yeah, this is how it works; we also have bind and unbind integrated with it, and this is how easily we can bring in any provisioner and any IaaS-native service or other backing service.
A: Excellent, excellent, thank you so much. We don't really have time for questions, but I will ask one thing, which is: where would people go to find more information? Because obviously people will watch this on the video. Can you tell them where to go? Do they go to the Service Fabrik repository on GitHub, where they'll have all the documentation and links on how they can contribute and add more provisioners, or do you have a blog post?
J: The information is in the GitHub repository, so you can check the readme or the documentation links that we have as part of the repository. The repository is under cloudfoundry-incubator, service-fabrik-broker. You can check out the wiki, the API documentation, and the architecture documentation that we have, and if you have any questions or issues, feel free to raise issues or even contact us directly on Slack.
L: Yeah, so as Max mentioned, my name is Nirvash, an engineer on the CFCR team. It's probably been a while since you've heard from us, so we thought we'd come to this CAB call and do a little update on what we've been doing. But before we jump into that: who are we? CFCR stands for the Cloud Foundry Container Runtime; you may have known us previously as Kubo, and our main responsibility is to package Kubernetes and its dependencies into a BOSH release.
L: The main deliverable we're offering is this: if you're a BOSH operator that wants a Kubernetes cluster, you take our BOSH release and deploy it, and it provides nice, reproducible deployments for K8s. It's used as a foundational layer that other teams are building things on top of. Our team members are mainly in the Dublin, Ireland office, as well as some of us here in the US; I work out of the San Francisco office, together with a VMware engineer who's working remotely from Portland. So we're a distributed team.
L: But let's walk through a little bit of the beginnings of CFCR. Back in early 2017, to remind you where Cloud Foundry and the open source community were: BOSH v2 had just become a thing and was ramping up; bbl, the BOSH bootloader, was starting to add support for creating directors with bosh create-env; there wasn't really quite a bosh-dns yet, it was still an idea that was percolating; and CredHub was brand new.
L: We knew that Kubernetes had been around for two or three years at that point, and we wanted to get the ball rolling and offer something for Kubernetes on the BOSH platform. So eventually we figured it out. We called it Kubo, which was Kubernetes plus BOSH, and we created some artifacts that people could use, kubo-release and kubo-deployment, and even created a little mascot to personify who we were.
But
kind
of
frustrating
into
where
we
are
now
we
had
I
guess
he
could
say
a
bit
of
an
identity
crisis.
We
start
calling
ourselves
Kubo
and
started
calling
ourselves
the
confounded
container,
runtime
team
and
that
really
forced
us
to
think
about
who
are
the
users
of
the
specialists
that
we've
built
so
quickly
and
for
a
while
we
thought
what
we
were
doing
is
providing
people
a
way
to
you
and
sell
kubernetes,
which
at
the
time
was
and
still
today,
is
a
difficult
problem
to
do.
Setting
up
your
community
so
that
they
are
those
clusters.
L: Setting up your Kubernetes clusters so that they are production-ready and able to upgrade is not only an unsolved problem; we were also one of many installers in the Kubernetes community, and still are, there are over 60 of them. So by creating tooling that abstracted away the BOSH layer, we were actually doing a disservice to the BOSH operator who is trying to use Kubernetes on BOSH. So what we decided to do is rename the team and shift our focus a little bit to be more BOSH-native.
L: The features I was talking about from back in 2017, which were still maturing then, we're now leveraging a lot more, making it a much more BOSH-native experience to use CFCR. We're also thinking a lot about how we can secure clusters. Aside from just creating reliable, reproducible Kubernetes clusters on BOSH, we're really trying to make our clusters secure by default, which, for those who have played around with Kubernetes, is not what the most common setups give you.
L: The other thing we really care about is for people to be able to do backup and restore of their clusters, so we're working with the Platform Recovery team to provide a BBR solution, not only for a single master, which we have today, but also for a multi-master Kubernetes setup. And, as I already said, we released multi-master support probably more than six months ago.
L: Back in the day, people were really nervous about etcd, especially in the Cloud Foundry community, but what the CFCR team did was create its own etcd release that's specific to CFCR. It really goes through all of the etcd operator guidelines, makes sure that we adhere to them, and does less of the orchestration that I think we were bitten by in the past. So look out for more multi-master support with that.
L: Some of the challenges that we're currently facing in CFCR: there's a little bit of the technical debt that I described, where we were provisioning our own BOSH directors and abstracting all of this away from the user. That actually meant a lot of tooling that we can now get rid of, but it's very deeply integrated into our CI and into the way that we were offering tooling for our product. So we're paying down some of that technical debt.
L: Our users can be people who just want a cluster and our default settings, and then there can be some really experienced folks who know exactly what they want, or are trying to build really cool things on top of it, and who want all of the feature flags enabled inside the Kubernetes API server, things of that nature. So we're trying to find the right balance, so that we can continue to provide those reliable clusters without people shooting themselves in the foot, and produce a good product.
L: It's also, for us, about making sure that the conformance tests, which cover the baseline features that should work in any Kubernetes, still pass on the IaaSes that we configure CFCR for. We have some PRs out there for Azure, and we're definitely going to roll those in very soon. Also definitely on our mind, up next, is the rotation of certificates.
L: All the components inside of Kubernetes can be configured to communicate over TLS, and there are a ton of them that do that; as a result, we are definitely using CredHub to generate those certificates. But the rotation process for them is going to be a little bit tricky if we want to provide the least amount of downtime on the API server and the rest of the control plane. So we're going to think about a good strategy for that.
A: On that topic: you said that you now have backup and restore. Does that mean that you periodically back up the cluster? How is the data being saved, and how often? I'm not sure exactly how that works, because I remember that in [inaudible] you have to configure each component so that it actually has its own strategy and backup, but in this case I guess you have to package all the different pieces.
L: So I have to be more specific about backup and restore: what we're really doing is taking snapshots of the etcd nodes that back the kube-apiserver, its backing store. The scripts for that are baked into the CFCR release, so someone who is using the BBR CLI can come in, take backups on whatever cadence they like, and then use those to restore onto new Kubernetes clusters. Yeah.
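The snapshot step described here can be sketched as the command a backup script might run; the `etcdctl snapshot save` subcommand and TLS flags are real etcdctl v3 usage, but the endpoint and file paths below are illustrative, and the snippet only builds the argument list:

```python
# Sketch only: builds the etcdctl invocation a CFCR backup script might run
# (etcdctl v3 also expects ETCDCTL_API=3 in the environment).

def etcd_snapshot_cmd(endpoint, out_path,
                      cert_dir="/var/vcap/jobs/etcd/config"):
    # BBR triggers per-release backup scripts; for CFCR the essential step
    # is snapshotting the kube-apiserver's etcd backing store.
    return [
        "etcdctl",
        f"--endpoints={endpoint}",
        f"--cacert={cert_dir}/etcd-ca.crt",  # illustrative cert paths
        f"--cert={cert_dir}/etcd.crt",
        f"--key={cert_dir}/etcd.key",
        "snapshot", "save", out_path,
    ]

cmd = etcd_snapshot_cmd("https://127.0.0.1:2379",
                        "/var/vcap/store/etcd-snapshot.db")
```

Restore would then replay such a snapshot into the etcd nodes of a fresh cluster.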
L: That is part of the current release, but the backup-and-restore support is just for a single master. We are working with the Platform Recovery team, which builds BBR, to support multi-master as well; they recently submitted a PR for that, and we're probably going to roll it in and test it fairly soon.
L: We're definitely at a place where the manifest just works, so you can deploy that. Most people also want the cloud provider integrations so that they can provision IaaS resources from within the Kubernetes cluster; there are a couple of obstacles for that, but yeah, we're pretty much at that point already. The readme page of our kubo-release repository is where you can find the documentation for this.
L: We actually, yeah, you're right, we do have a story for that in our backlog, to make it so that you don't have to upload the release yourself. Right now we're doing some of that work in our CI, because that's easy enough for us to do in the manifest; but, like I said, a lot of our technical debt comes from the kind of weird things that we did in CI, which make it a little bit harder, and it's more dev-centric, I guess. But yeah, we do.
L: It's certainly outside the scope of our team, but I know it is something that other teams are working on: trying to figure out a way for those two platforms to coexist so you don't need to double up on a few of those things. So it's in the works; I've seen other teams working on it, but it's outside our scope, of course.
L: I think that was the same question just now, but yeah, it's outside the scope of our team. I know that there is another team that is trying to strategically figure that out, so that it's an easier experience to target either one. I think people are interested in having their services created by Kubernetes; your apps run on Cloud Foundry and bind over to services in Kubernetes. So there are teams working on that, and we're just here for questions on CFCR itself.
A: Let's see, if there are no questions, I'll give a free plug for Rashi: she's too shy to say it, but she will be in Basel and they have a talk. I think they're going to go into even more detail than this, and of course you can poke her and ask more technical questions. She's going to be there, and I think the whole team too, so come.
A: More reasons to come to Basel. All right, I'm trying to get Dmitriy to come also, so if you have questions for him, we'll see what we can do. Anyway, I know, I know, I don't know anything; I just get people to come to Basel. All right, take care, everybody!