From YouTube: Cloud Foundry Community Advisory Board Call [June 2020]
A: So, CAB call. We've got Summit next week, so we're probably starting off with the CFF highlights, since Summit is next week. You probably all know about this; it's very exciting. The schedule is up, and it looks like we've got some great breakout sessions. We also have afternoon hands-on labs, which we added a little bit later in the schedule. If you haven't taken a look at those, I highly recommend it. I'll briefly talk about those, because I don't think they've been discussed here yet, but we've got some great labs. We've got a cf-for-k8s deployment lab, we've got a KubeCF deployment lab, and we've got a lab from the CLI team on new features in v7, which should be pretty awesome. Then Steve Payne Burke will be running through "Try Cloud Foundry" for total newbies, so it's a nice little curriculum. Oh, and then, of course, Project Gluon from Stark and Wayne, which you'll be seeing a presentation on during this call. James, you're good to go for that today? I'm sure you are. Cool.

Does anyone have any questions about Summit? Obviously I can answer questions about the labs; I can't necessarily answer all of your questions about Summit, because this is the first time we've done an event like this. But does anyone have any questions or thoughts on Summit?
A: I assumed that as well, yes. I don't think that's gated at any point; I assume you can sign up an hour into the event. I'll double-check that, though. Either way, it's good to ask questions early, and of course, if anyone does have questions that pop up, just reach out in the Summit channel in the Cloud Foundry Slack, and hopefully we can get all of your questions answered there. It's our first time doing a virtual event, of course.
A: So, you know, it'll be interesting, but I'm looking forward to it. Not as much as I would if we were all going to be hanging out in Austin or some other locale, but it should be good. With that, that's all we've got from the CFF, so I will move on to PMC project highlights. Eric, do you want to talk about some things happening in the App Runtime PMC?
Yeah.

C: I think at this point now you can bring your own certs for ingress gateways. And then KubeCF has released; I think the latest version there is 2.2.2, and I know they've been getting some bug fixes into that. They're also focused now on incorporating the UAA deployment artifacts as part of that set of resources. One other thing that's coming up, in fact...
C: Some other component team highlights: the CAPI team is, I think, just finishing up work on the next phase of their integration of kpack. This is going to allow them to integrate the Paketo buildpacks, as more granular buildpacks, into cf-for-k8s as part of that set of buildpack resources.
C: On a little bit more of the meta front, we've had some new leadership come into the UAA team, and they're currently reevaluating the roadmap and how they can be more open in terms of collaboration on it and on development. I think they've mostly been working with some of the folks from SAP, who have also been strong collaborators on that project, to drive that forward.
C: A couple of other highlights: I'd mention that Eirini is continuing work on getting app tasks to run; that's still in flight. They've also been exploring some initial CRD representations of the Eirini API contents, figuring out how that might work and how to move more of the domain model for cf-for-k8s and KubeCF into the Kubernetes API itself. There are a couple of things the networking team is also finishing up.
B: I understand there was a posting a while back, probably from Roopa, asking for interested folks in the foundation to collaborate on what that next thing is. They're still looking for additional help and for folks interested in that. OK, I'll see if I can dig up the link to the email post that describes that. All right, thank you.
D: Can everybody hear me all right? OK. So, if you don't know me, my name is James Hunt. I'm the director of R&D here at Stark and Wayne, which means I basically ask questions and then write code to answer those questions, all day, every day. Dream job; super glad to have it. One of the questions that was asked about a month ago came up while we were just talking in a water-cooler virtual chat.
D: If you go to starkandwayne.com/gluon, this is kind of the landing page for the whole open-source thing. It gives you a teaser of what we're trying to do and what the idea was, and then a link off to the video. But we don't have time to watch that video, and I wouldn't subject you to that.
D: Not without your consent, anyway. So instead I'm going to just quickly run through the slides, to get an overview of the architecture and how this thing is put together, and then we'll take a look at a BOSH director that I deployed this morning into my Buffalo-based vSphere lab, and then we'll deploy something small on top of it that won't take three hours to kick out.
D: So, as I said, the premise of Gluon is: what if we could kubectl instead of bosh? By that I mean: let's have unified tooling for managing all of our infrastructure, whether that's containerized, whether it's VM-bound, or what have you. To that end, we had to create some new custom resource types. The first custom resource type is a BOSH deployment.
D: A BOSH deployment is modeled as: what kind of deployment are we deploying, what ops files and what vars are we specifying for those ops files, and, for that base deployment, the resource itself and the controller manage the state and the vars store. Those are things we currently defer to the operator: "hey, I generated a bunch of certs and some passwords and some other things; you need to put these safely and securely somewhere." We actually just shove those back into Kubernetes after the deployment is done.
D: And finally, we have a BOSH config, which rolls up the cloud and runtime configs and lets us supply basically the whole of our BOSH configuration landscape in terms that the Kubernetes APIs can understand, and that Gluon can then translate back into bosh commands. This is declarative, as opposed to imperative. Right now, if you're deploying BOSH-based infrastructure, there's a lot of "do this, then wait; when that's done, do this and wait; and for God's sakes don't do the second step twice, because it will fail the second time." So it's not idempotent.
D: It is imperative. This, by contrast, is all declarative and idempotent. It's managed for you: it keeps track of the vars you've specified for your deployments, and it keeps track of who's where and the chain of dependencies. For example, in the video demo I actually applied all of the configuration at once, and the Gluon controller says: OK, I have a deployment for the BOSH director.
D: I have a deployment for Cloud Foundry, I have a stemcell, and I have a cloud config and a runtime config, and it figures out the order of operations for all of those things to happen by saying, essentially, that there are implicit and explicit dependencies; we'll see both of those as we get into the YAMLs. It all starts with the actual BOSH director, the BOSH deployment of the demo BOSH.
D: The stemcell can't get uploaded until the director is up. The cloud config and runtime configs also can't be applied until the director's HTTP endpoint is available. And we can't really start the deployment of whatever we're deploying on top of that director until all of those dependencies are met. The idea is that, rather than burden operators with that dependency graph, that list of what needs to go when and in what order, we have the knowledge and the information at our fingertips to figure it out. So let's just let the Gluon controller do that.
D: If we take a look, we'll start with our director. The director YAML is actually the most complicated one we're going to see today; it's a bunch of different things all kind of mashed together. We're going to start with a config map, and this is how we specify our vars files. So if you're used to doing a `bosh create-env -v this=that -v ...` and so on, you can now put those variables in either config maps or secrets.
D: This is a config map that lists out all of the, quote-unquote, public information that I can show you on a video that's recorded and sent to the internet; talk about our Buffalo lab: things like which vCenter am I deploying inside of, what does the networking look like, and where do I want to store all these assets inside of the vCenter. And, as we'll see when we get into the vars, you can reuse these config maps across multiple BOSH directors or other deployments.
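As a rough sketch, a vars config map along the lines described might look like this. The name, namespace, and keys here are illustrative guesses based on the narration, not the exact manifest from the demo:

```yaml
# Hypothetical sketch of the "public" vars ConfigMap described above.
# Names and keys are illustrative; the real demo manifest may differ.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cab-demo-bosh
  namespace: gluon
data:
  vcenter_dc: buffalo-lab        # which vCenter datacenter we deploy into
  vcenter_ds: bosh-assets        # datastore for BOSH-managed assets
  internal_cidr: 10.128.16.0/24  # what the networking looks like
  internal_gw: 10.128.16.1
```

Any BOSH deployment resource that references this config map picks up these vars, which is what makes it reusable across several directors in the same lab.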
D
If
you
have
lots
of
deployments
that
are
the
same,
you
can
extract
the
common
information,
the
common
VARs
out
of
individual
scripts
and
put
them
in
one
config
map
and
then
we'll
map
them
back
in
on
the
Boche
deployment
side,
which
is
right
here,
so
the
boss
deployment
repo
are
kind.
This
is
by
the
way,
a
v1
alpha,
we're
still
playing
with
this.
This
is
subject
to
change
at
a
moment's
notice.
D: The BoshDeployment represents our BOSH deployment, surprisingly enough. The name will be the name that we reference it by in other parts of the system, so I'm going to create a BOSH director called cab1. I'm going to give it some arbitrary labels, so I can find it with my Kubernetes inventory, and then, diving deep into the spec, we specify where we're pulling our deployment YAMLs from. This is the upstream official bosh-deployment, plus which branch, tag, or commit of it you want to deploy; that's more useful for cf-deployment, where they tag versions.
D: Next up is the list of ops files I'm going to throw as `-o` arguments to my `bosh create-env` or my `bosh deploy`. Because I'm on vSphere, I'm going to activate the vSphere CPI. I'm not going to throw a UAA or CredHub on here, mostly in the interest of speed. And then we get into vars, which is where we start mapping things back in from all the disparate locations. So we saw the cab-demo-bosh config map that had all of our vCenter stuff; we also have a secret.
D: This is my creds secret for getting into the vSphere vCenter as my deployer user, which, again, because this is a recorded call, I'm not going to show you. But we map just the user key and the password key from this secret, in the current namespace, into the vars vcenter_user and vcenter_password. This will override anything that's specified in the config map, and you can specify as many of these as you want.
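Putting the pieces just described together, the cab1 resource might look something like this. The apiVersion, kind spelling, and field names are my best-guess reconstruction from the spoken description, not Gluon's exact v1alpha schema; check the Gluon repo on GitHub for the real one:

```yaml
# Hypothetical sketch of the cab1 BoshDeployment described above.
# apiVersion and field names are guesses from the narration.
apiVersion: gluon.starkandwayne.com/v1alpha1
kind: BOSHDeployment
metadata:
  name: cab1
  labels:
    lab: buffalo          # arbitrary labels for inventory queries
spec:
  repo: https://github.com/cloudfoundry/bosh-deployment  # upstream manifests
  ref: master             # branch, tag, or commit to deploy
  ops:
    - vsphere/cpi.yml     # activate the vSphere CPI (no UAA/CredHub, for speed)
  vars:
    - configMap: cab-demo-bosh      # shared vCenter / networking vars
    - secret:
        name: vsphere-creds         # deployer credentials, mapped key-by-key
        map:
          user: vcenter_user
          password: vcenter_password
```

The ordering matters in the scheme described: later vars sources override earlier ones, so the secret's mapped keys win over anything in the config map.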
D: So, if we wanted, I could take this further: I could have a config map named all-boshes, then a config map for the vSphere-based BOSHes, and so on, and then I can hierarchically roll out my config with overrides at the various levels. The vsphere-based-boshes config map might have all of the information about my Buffalo-lab vSphere environment, and then cab-demo-bosh is only going to specify things specific to this BOSH director. We use this pattern a lot in our Genesis deployment product, and it works fairly well.
D: It gives us a lot of flexibility, and we've kind of mapped that over into Gluon. Finally, you can just override individual vars: if you don't want to bother with a config map or a secret, you can just embed a literal. This one says the internal IP is 10.128.16.201, and that will overwrite anything that has come before. When we apply this, which I did this morning, we can look at our BOSH deployments.
D: We have a whole bunch of stuff happening in the environment. We have a state file, or rather a state persistent volume claim, for the `bosh create-env` state.json file, so that we can keep track of VM CIDs and stemcell information. We also have a fair amount of other secrets. In fact, let's just list the config maps: there's our cab1-state, and our cab-demo one; no, that's the old one.
D: cab1-state will actually hold the state file that's from the PVC, so that we can pull it back later if we accidentally delete the PVC. But the real meat of it is going to be in that job. So if we look at the jobs, and then we look at the pods for the job, this is where we're now into standard Kubernetes territory.
D: So here we're saying `bosh create-env` with the bosh.yml, with the state file in our volume, and the vars store also in our volume, and then we're pulling all of our variables from environment variables that are prefixed with GLUON_; that's really just an implementation detail. But, as you can see, we go through; this is stock output from a `bosh create-env`.
D: We do our deploy, we spin up our VM, we wait for the agent, we compile and compile and compile, and then we're all done. The last bit is what's specific to us, to Gluon: pulling that information out of the state file and out of the vars store and stuffing it into Kubernetes objects. Specifically, a config map called cab1-state and a secret called cab1-secrets. What that cab1 secret is going to show us is all of the interesting information about the director.
D: What's the URL, the username, and the password that were generated by `bosh create-env`'s config-server implementation? We can use that in a custom CLI that I wrote, called gluon, where you pass it the name of the director. That causes it to use the currently targeted kubeconfig, via kubectl, to go pull those four pieces of information, and when we run it, it will connect up to that BOSH director and basically let me run bosh commands. So I can take a look at deployments; I can take a look at cloud configs.
D: Here we say: this is the cloud config, and we're going to put it on cab1. Then everything from line 11 on down is just a plain YAML cloud config that I shoved into the Kubernetes YAML as a stream; the pipe makes it a multi-line string. This will also turn into a job inside of Kubernetes, and it will have a dependency on that cab1 BOSH director. So if we look at the jobs, we have an update-cloud-config job.
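A rough sketch of what that cloud-config resource could look like, going by the description above. Again, the kind, apiVersion, and field names are assumptions rather than Gluon's published schema; the inline cloud-config body is a generic example:

```yaml
# Hypothetical sketch of the BOSH config resource described above.
# kind/apiVersion/field names are guesses from the spoken description.
apiVersion: gluon.starkandwayne.com/v1alpha1
kind: BOSHConfig
metadata:
  name: cab1-cloud-config
spec:
  type: cloud          # a cloud config (as opposed to a runtime config)
  director: cab1       # implicit dependency: waits for the cab1 director
  config: |            # the pipe makes this a multi-line YAML string
    azs:
      - name: z1
        cloud_properties:
          datacenters:
            - name: buffalo-lab
    vm_types:
      - name: default
        cloud_properties:
          cpu: 2
          ram: 4096
```

The `director: cab1` reference is what lets the controller hold the update-cloud-config job back until the director's endpoint is actually reachable.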
D: You'll see this is just standard BOSH running an upload-stemcell. And then, finally: once we have a BOSH director, once we have a stemcell, and once we have a cloud config and, optionally, a runtime config, we can create another BoshDeployment that uses our cab1 director, but with a different deployment repo. This one is a bosh deployment repo that I spun up on GitHub. It deploys nothing; it just gives me VMs. It's great for demos because it doesn't compile packages, and I've got a couple of ops files, so we're using the same machinery.
D: The controller does the right thing: it babysits the jobs and pulls the creds out. A side effect is that, because we have all the creds to talk to the BOSH director, and we know that that BOSH director is called cab1, that's where Gluon will pull the creds from when it goes to do the bosh deploy. It will actually mount the secret into the container that's going to do the job run, and then I never actually have to know what my passwords are; they're all managed in Kubernetes.
D: Why did I do this? Well, Troy, I'm in R&D, so I have to do both R and D, and sometimes I don't have any ideas. The main reason I did this: cf-for-k8s is coming, Eirini is here, and KubeCF is getting better every day, but we have a lot of people we talk to who still want to run Cloud Foundry on VMs, and who will for a fair amount of time.
D: As long as there's support, one of the things this architecture allows them to do is move forward with a Kubernetes-native strategy while still maintaining their legacy BOSH environments, their legacy VM-based Cloud Foundries, with the additional automation that you get from being able to create a declarative version of your BOSH infrastructure.
D: If you want to upload new stemcells every time they pop out now, all you have to do, if you've got machinery that looks at bosh.io and pulls stuff down, is mechanically create these stemcell resources, apply them to the K8s cluster, and be done. Let the Kubernetes machinery handle the rest of it.
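For instance, purely as an illustration with made-up kind and field names, an automated pipeline watching bosh.io could stamp out resources like this and `kubectl apply` them:

```yaml
# Hypothetical sketch of a Gluon stemcell resource.
# kind/apiVersion/field names are illustrative, not the actual schema,
# and the version shown is just an example value.
apiVersion: gluon.starkandwayne.com/v1alpha1
kind: BOSHStemcell
metadata:
  name: ubuntu-xenial-621-76
spec:
  director: cab1        # uploaded once the cab1 director is up
  name: bosh-vsphere-esxi-ubuntu-xenial-go_agent
  version: "621.76"     # discovered mechanically, e.g. by polling bosh.io
```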
D: And, as I mentioned, the secrets are pivotal to figuring out how to talk to things, but the secrets don't have to come from Gluon, right?
D: You can just put the username, the password, a CA cert, and the endpoint URL in a secret and then start using it from the Gluon controller, even if that director was deployed via create-env four years ago or, God forbid, MicroBOSH seven years ago. If you have the creds, you can kind of adopt Gluon and move forward and gain all of that automation experience.
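So adopting an existing director might be as simple as hand-crafting a secret along these lines. The secret name and key names here are my invention; Gluon presumably documents the exact keys it expects:

```yaml
# Hypothetical sketch: adopting a pre-existing BOSH director by
# hand-crafting its credentials secret. Key names are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: old-director-secrets
type: Opaque
stringData:
  endpoint: https://10.0.0.6:25555   # director API URL
  username: admin
  password: copied-from-creds-yml    # placeholder; use the real admin password
  ca: |                              # director CA certificate, PEM-encoded
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```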
D: ...the Docker daemon, and it's hitting a vSphere that's on a VPN. The Gluon hands-on lab will be doing the latter half of this exercise: we actually start with a BOSH director, we apply cloud config and stemcells, and then we do some deployments of things to try, as we also look into what the manager is doing and how the controller works. But that's all on GCP, so we use GKE to run Gluon, to then bootstrap our BOSH director into GCP's Compute Engine, which was pretty nifty.
D: You do, because, well, you do if you're going to have BOSH do the `bosh create-env` stuff, the from-nothing deployment of something. We do need a PVC, and we need a storage class to fulfill the PVC we make for the state file. In theory we don't have to do that; that was a design decision, and if it becomes problematic, we might change it. I just wanted to have something durable if the pod died, and it has died, repeatedly, because of bad configuration.
D
And
thought
it
was
silly
and
then
answered
it
him
and
a
whole
bug.
We
then
spent
the
next
Chris
Weibull
and
I
spent
the
next
like
two
days
just
throwing
out
random
things
like
now
that
we
have
this.
We
can
do
this
this
this
this
and
this
it's
one
of
those
it's
a
small
tool,
but
it's
a
very
sharp
tool.
Yeah
I
think
I'm
a
little
biased
because
I
wrote
it,
but.
D: Anyway, like I said, the video out there is me, for half an hour or more, rambling in more detail. There's a funny part about 28 minutes in, if you can make it that long. This is on GitHub; it is open source; it is MIT-licensed. I just finished putting up, or will be putting up (I hope we're there), the installation instructions, and I'm trying to figure out what we need to do for dev and other things.
A: Awesome, thank you, James. Yeah, super cool project; looking forward to the lab next week. Any other questions or comments before we wrap this up? All right. Well, thank you all for joining. Looking forward to seeing you all at Summit next week. Please go register, please go check out the schedule, particularly the hands-on labs, and see you then. Thanks so much.