From YouTube: Cloud Foundry Community Advisory Board [Sept 2020]
B: Okay, welcome everyone. Thank you for attending this Cloud Foundry Community Advisory Board meeting for Wednesday, September 16th. We'll go through the usual agenda, starting with Chip, who I'm hoping will give us some Cloud Foundry Foundation highlights and updates, including the CF Summit, which is coming very soon.
C: Yeah, so real quick: this is sort of the usual thing, right? We've got a summit coming up; it's going to be fun, October 21st and 22nd. The North American event was two days but essentially one track. This is going to be two days with two tracks, plus a lot of side activities, so we're really excited about it. We think we learned a lot from the first virtual summit we held earlier in the year.
C
We've
picked
a
slightly
different
platform
that
should
be
a
little
bit
more
modern,
but
the
same
basic
thing
applies.
The
the
word
of
the
event
is
all
about
collaboration,
so
we're
trying
to
find
ways
to
help
the
the
amazing
people
who
have
volunteered
to
be
the
track
chairs,
look
through
the
the
proposals
to
make
sure
that
you
know
really
any
of
the
as
many
of
the
talks
as
possible.
Help
inspire
collaboration
help
inspire
kind
of
follow
on
discussion.
You
know
trigger
good.
C: You know, good Q&A sessions after the pre-recorded part of the talks; so hopefully it will improve on what we did in North America. Anybody who's a contributor, and this is purposefully a very vague statement about what a contributor is: it doesn't require code contribution; it counts if you have helped with docs, for example.
C: The second thing is: lots of you touch end users. Those of you who work for a member organization that has a bit more of a formal process for sharing things like our user survey may very well be covered already; but if you're touching end users, customers or otherwise, please help us get the word out about the user survey. That data is, I think, really important.
C
So
we
always
try
to
collect
something
that
provides
meaningful
information
that
could
be
used
by
the
project
itself
to
help
it
understand
the
the
user
base
in
a
bit
more
of
a
quantitative
way
that
spans,
you
know
again
the
information
that
you
might
get.
If
you
work
for
one
of
the
vendors
you
know
through
your
own
product
management
processes
right
because
this
will
hopefully
give
a
bit
of
a
more
global
view.
C
So
those
are
the
two
I
don't
know.
I
would
say
major
focuses
that
I
wanted
to
to
share
here
at
the
at
the
cab
call
from
the
cff
just
for
fun.
I
tossed
out
there
an
interesting
tweet
from
from
eddie
who's,
the
he
is
the
program
manager
for
cloud.gov,
which
is
the
u.s
federal
government's
cloud
foundry
based
service.
C
You
know
we
were
having
a
discussion
about
air
quality
and
one
of
the
one
of
the
highest
use
sites.
Right
now,
out
of
all
the
us,
federal
government
properties
is
airnow.gov,
which
is
in
fact
posted
on
cloud.gov.
C: Absolutely. I'm pretty sure there's some caching in front of it, but still, we should be proud of that. So nice work, everybody who's worked on Cloud Foundry.
B: It's a tremendous kudos to see something that people are really depending on right now being served up by Cloud Foundry; I was very proud when I saw that tweet. Okay, if there's nothing else from the foundation, let's kick things off with your always-comprehensive PMC updates, starting with App Runtime.
D: Hey, thanks, Trey. Sure, so as you can see, we've got a few highlights here across the various projects. Maybe the most visible one is new minor versions of both the v6 and v7 CLI. This is intended to be the last minor version of v6 to be released, addressing some bugs in a few features, and my understanding is that from now on the CLI team will be releasing only major version seven updates.
B
Quick
question
is
there?
Is
there
the
possibility
that
we
would
still
see
a
critical
security
things
if
they
should
appear.
D
I
don't
know
for
sure,
but
given
given
our
history
of
security,
I
would
imagine
that's
still
plausibly
in
scope.
That
might
be
something
we
could
check
with
and
we
can
probably.
D
Yeah,
I
can
see
those
even
being
just
patched
versions
on
this
latest
version:
yeah,
absolutely
yeah,
yeah
yeah.
So
thanks
ray
good
question
and
then
we've
got
incremental
updates
from
both
cfrigates
and
cubecf,
with
some
nice
updates.
I
think,
coincidentally,
both
relating
to
base
image
stack
support.
D
We
have
the
basic
functionality
of
updates
working
in
cf
for
gates
now
and
there's
some
really
interesting,
new
multi-stack
support
in
qcf
and
then
yeah
continuing
some
of
the
project
updates
cappy's
been
doing
some
work
recently
to
move
even
more
content
out
of
the
traditional
blob
store,
object,
store,
they've
used
for
files
and
into
the
oci
image
registry
in
cfrage,
so
they're
moving
the
assembled
at
bits
packages
into
that
registry.
D
So
at
this
point
the
only
thing
that
the
blobster
will
contain
is
the
cache
of
files
for
resource
matching,
so
they're
thinking
about
how
they
want
to
address
that
in
the
future.
But
it's
looking
like
an
in
the
medium
term
that
dependency
is
going
to
go
away
entirely
from
the
set
of
resources
that
you
need
to
use
to
run
cf
for
kids
with
happy
containerized
that
way,
relying
on
kpak
and
some
some
other
nice.
D
Things
that
have
closed
out
irene
had
been
doing
a
track
of
work
on
supporting
application
tasks
that
went
off
tasks
and
that's
all
now
complete
and
in
latest
versions
of
irene
and
they've
also
been
doing
some
work
to
inject
the
cf
instance
index
environment
variable
into
application
containers
when
they're
running
as
pods
on
kubernetes.
This
is
something
that
you
know.
D: We've generally regarded any kind of reliance on that variable as a somewhat questionable practice, but we do know there are some libraries that will check whether they're index zero or not to optimize running their database migrations when they're bootstrapping, that kind of thing.
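As a sketch of what that index enables: since Eirini runs each app instance as a StatefulSet pod, an index like CF_INSTANCE_INDEX can be derived from the pod name's ordinal suffix. This is an illustration of the pattern, not necessarily Eirini's actual implementation, and the pod names are made up:

```python
def instance_index(pod_name: str) -> int:
    # StatefulSet pods are named <set-name>-<ordinal>; the trailing
    # ordinal is a stable per-instance index, analogous to CF_INSTANCE_INDEX.
    return int(pod_name.rsplit("-", 1)[1])

def should_run_migrations(pod_name: str) -> bool:
    # The usage described above: libraries gate one-time work (e.g. database
    # migrations at bootstrap) on being instance zero.
    return instance_index(pod_name) == 0
```

A library following this pattern would run its migrations only in the pod named, say, `my-app-0`, and skip them in `my-app-1` onward.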
And then, updates from a couple of other teams:
D: Networking has continued to do some work on the scalability of Istio and their interactions with it, and now that we have the Route CRD serving as an intermediate layer in the architecture, they're exploring the possibility of plugging other ingress systems in behind it, such as Contour. Logging and Metrics has been continuing work to improve the performance and scalability of the Log Cache component, as they've uncovered some ways in which it's somewhat inefficient.
A: Hello. We're still just working through stuff. A quick note to the community: there was a high CVE on the Linux kernel, so we have a new stemcell that came out last week. If you are planning to do a stemcell bump, this is a good one to bump up to. Other than that, we're continuing to get some more releases ready, but nothing yet to announce.
B: Likewise, I have only very short updates from the Extensions PMC, because we haven't met in a long time and I haven't yet gathered everyone together. We have a meeting coming up, which I should announce now: 11 a.m. Pacific on September 28th, and I will be reminding all the project leads to at least give me updates for it.
B
So
then,
the
following
cab
call
I'll
have
more
for
you,
but
I
do
because
I
talk
to
the
stratos
team
all
the
time
they
have
two
releases
coming
up
and
I
don't
want
to
steal
cf
marketing's
thunder,
but
there
is
something
special
coming
with
4.2
chip.
Can
I
can
I
let
the
folks
know.
B: Of course. Okay, so SUSE has been maintaining extensions for Kubernetes in our downstream release of Stratos for a while. We are donating those to the foundation as 4.2, which should land just before the CF Summit (you heard it here first), and this exposes a whole bunch of Kubernetes features that you can try out.
C: Yeah, that should be neat. Hey, before you move on, I actually want to interject here. Yui is amazing, because she caught something I forgot: in addition to asking for help with the user survey, the other thing we all want you to spend time thinking about, and then typing and submitting forms for, is the community award nominations.
C
So
I
toss
that
into
the
notes
up
above
there's
a
there's,
a
post,
that
you
can
click
through
to
give
us
some
nominations,
but
this
is
an
opportunity
to
nominate
your
peers
nominate.
You
know
the
people
you've
worked
with
that
have
done
really
interesting
things
for
the
community,
whether
it's
as
a
contributor,
whether
it's
you
know
an
awesome,
end
user
story,
whether
it's
someone
who's
done
a
really
good
job
advocating
for
the
project.
That's
always
a
lot
of
fun,
and
so
please
take
some
time.
B
Okay,
I
did
some
last
minute
scrambling
so
that
we'd
have
some
presentations
to
talk
about
now.
I
saw
some
really
good
ones
yesterday
in
the
kubernetes
sig
meeting
and
asked
a
couple
of
those
folks
if
they
would
like
to
present
today
for
for
a
wider
and
slightly
different
audience.
E: Yeah, so I'm actually also here with Paul, who I co-presented with yesterday. Paul is currently the interim PM for Release Integration, driving the cf-for-k8s initiative, and he's going to give an introduction.
F: Thanks, Piyali. So in the BOSH-based CF deployment, application stack updates were provided to the platform through new cflinuxfs BOSH releases, which, as we all know, Diego then acted upon, using the base filesystem contained therein to patch all the pushed apps simultaneously.
F
However,
in
this
new
kate's
world
we
don't
have
a
bosch
and
there
we.
Therefore
we
don't
have
a
cf
linux,
boss,
release
or
in
fact
a
diego.
So
how
will
these
application
stack
updates
occur?
We'd,
like
to
demo
recently
completed
work
by
the
cappy
team.
F
That
brings
this
vitally
important
feature
to
cf
cates
and
to
talk
through
the
feature
roadmap
from
here,
how
we
plan
to
provide
a
better
operator
experience
and
how
we
plan
to
expose
metrics
to
the
operator
allowing
them
to
troubleshoot
when
things
perhaps
don't
go
quite
as
expected,
so
I'd
like
to
hand
over
to
piyali
who
can
do
the
demo
for
you?
Thank
you.
E: Thank you for the introduction, Paul. I will share my screen right now.
E
All
right
is
this
font
good
for
everyone,
font,
size
cool,
so
just
to
explain
what
these
four
windows
you
see
in
front
of
you
are
on
the
top
left.
I
will
be
basically
performing
the
actions
that
an
operator
would
do
to
actually
carry
out
and
trigger
a
stack
update
in
cf
for
kate's
and
on
the
top
right.
I
will
show
you
the
stack
actually.
I
could
just
show
you
right
now
what
stack
we're
currently
on.
E
So
if
I
go
to
the
specs
section,
and
specifically
this
run
image
as
you
can
see,
we're
on
0.0.51
for
this
run
image
and
for
the
build
image
we
are
on
0.0.50
and
the
build
image
and
run
image
are
separate.
E
But
in
this
in
today's
demo,
we're
only
going
to
be
changing
the
run
image,
but
you
can
change
both
if
you
want
and
on
the
bottom
left.
We
have
images
so
in
cf
for
kate's,
for
those
who
don't
know
we're
using
kpac
to
actually
build
the
images
for
the
apps,
the
oci
images
for
the
apps
and
pushing
them
to
an
app
registry,
and
basically
the
general
flow
of
a
cf
push
is
that
irene
will
then
essentially
take
those
images
and
put
them
into
staple
sets
which
manage
the
pods
that
run
the
app
workloads.
E: When I trigger the stack update, all I'll have to do is run this kubectl patch command, and I will change this to 0.0.52. Say the operator sees that there is a patch version available that takes care of some security vulnerabilities; they'll want to patch immediately instead of waiting for the next BOSH release, as in the previous world. Now they can easily patch whenever a new stack update is available, just by running this kubectl patch command.
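The operator action described here amounts to a one-line patch of the stack resource. A hedged sketch follows; the resource kind, names and image repositories are illustrative, not the exact cf-for-k8s manifests:

```yaml
# Applied with something like:
#   kubectl patch clusterstack cf-default-stack --type merge -p "$(cat stack-patch.yml)"
spec:
  buildImage:
    image: example-registry.io/stacks/build:0.0.50   # unchanged in this demo
  runImage:
    image: example-registry.io/stacks/run:0.0.52     # bumped from 0.0.51
```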
E: After the stack is patched, the kpack flow gets triggered immediately and all the images get rebased by kpack onto the new stack; that's what you see on the bottom left, where all the images went green. I can show you one example of what this looks like: if we take this image and go down to the status section, we'll see a latest build reason and a latest image field.
E: The latest build reason here is STACK, which means this build was done only because of a stack update, and for the latest image you'll see the OCI image reference pointing to the latest image built by kpack, which includes the new stack. The reason this is important is that, rather than Diego and BOSH as in the previous world, CAPI is now handling it: CAPI has a new component called cf-api-controllers.
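The status fields being shown look roughly like this on a kpack Image resource (a sketch; the registry path and digest are placeholders):

```yaml
status:
  latestBuildReason: STACK   # this rebuild was triggered only by the stack change
  latestImage: registry.example.com/cf-workloads/my-app@sha256:...
```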
E: By the way, on the bottom right you can see the stack update actually happening in front of your eyes; k9s is really cool for showing that in real time. But basically, the cf-api-controllers component that I'll show you is optimizing things under the hood.
E
The
kubernetes
rolling
update
strategy
for
stateful
sets
to
carry
out
the
stack
update
and
that's
what
you
see
over
here.
This
will
probably
be
a
good
example
because
it
has
20
instances,
but
you
can
see
that
one
at
a
time
these
pods
are
being
rebased
basically
and
are
actually
getting
restarted.
With
this
new
stack
and
on
the
top
right,
I
can
show
you
the
new
component
so
for
observability.
E: So here are all of the actions currently happening from our cf-api-controllers. Basically, every time there is an update, or some event that happens to the Image resource, the cf-api-controllers image controller will process that event and carry out some actions accordingly.
E
So,
for
example,
when
it
sees
that
there
is
an
image,
updated
and
specifically
updated
with
this
stack
build
reason,
the
image
controller
will
then
carry
out
and
actually
update
the
staple
set
directly.
So
there's
no
involvement
with
irene.
Yet
on
this
flow,
the
api
controller
is
directly
updating
the
stateful
set
with
the
new
oci
image
reference
that
comes
from
kpac,
and
this
will
change
shortly
in
the
future.
E
We
are
planning
on
optimizing,
irene's
new
lrp
that
they're
developing
to
update
the
lrp
directly,
so
that
irene
will
remain
as
the
only
component,
that's
interacting
with
the
staple
sets.
But
for
now
the
api
controller
is
directly
updating
the
stateful
set,
and
we
also
have
a
couple
other
future
tracks
of
work
that
are
really
exciting
that
are
coming
up
soon.
E
One
is
you
can
see
that
observability
is
pretty
difficult
to
be
trailing
all
these
logs
through
api
controllers,
so
we're
going
to
be
developing
a
better
way
for
operators
to
understand
the
progress
of
the
stack
update
and,
even
though,
like
the
bottom
right.
Canines
window
is
pretty
cool
where
you
can
actually
visually
see
it,
but
it
would
be
good
to
have
some
metrics
on
what's
going
on,
so
that
the
operator
can
go
and
troubleshoot
if
something
goes
wrong,
for
example,
and
we
also
have
another
feature
coming
up
to
support
rollbacks
app
rollbacks.
E
Our
new
feature
would
enable
automatic
rebasing
of
previous
revisions
as
well,
so
that
when
the
app
developer
rolls
back
to
a
previous
revision,
they
would
not
have
to
re-push
and
restage.
B: So cool. This is something that I know customers, or people who are interested in Cloud Foundry, and operators who are thinking of adopting it, really value: realizing they can do this kind of update, updating underlying container images without touching the code, en masse, in an automated fashion.
B
It's
it's
one
of
the
secret
sleeper
awesome
features
of
cloud
foundry
that
that
make
it
really
enterprise
ready,
and
so
I'm
really
glad
to
see
this
is,
is
up
and
and
and
going
in
in
cf,
for
kate's,
world
and
and
in
irene
or
with
irene.
I
should
say
yeah.
B
Thank
you
both
of
you
paul
and
pialy.
If
mario's
here,
please
could
you
speak
up
and
maybe
share
your
screen
and
tell
us
a
little
bit
about
quark
secret.
I
copy-pasted
something
in
it
in
the
agenda.
It
says
new
quarks
secret
feature,
but
it's
a
new
quark
secret
feature.
G
Yeah
yeah:
well,
it's
not
really
new.
I
talked
about
it
yesterday.
Already
I
recognized
some
faces,
so
it's
like
a
rerun,
but
okay.
I
think
this
time
I
know
how
to
use
zoom.
So
that's
an
advantage.
G
So
we
we
call
the
new
feature
templated
configs,
because
it's
basically
templates
that
generate
configs,
so
quark
secret
is
a
component
of
our
quarks
operator
that
we
use
to
deploy
qcf
on
kubernetes
and
quark
secret.
Specifically,
is
there
to
generate
secrets
inside
the
cluster
and
it
uses
a
cid,
and
then
the
output
is
stored
in
a
in
a
secret,
for
example,
for
password
certificates
and
so
on.
G
Some
of
the
future
features
are
like
that
you,
you
can
bring
your
own
secret
and
it
won't
update
it.
It
will
just
use
it
if
it's
there,
that's
what
we
use
in
the
qcf
deployment
like
you
can
create
all
these
secrets
by
hand
and
then
quark
secret
will
fill
in
the
missing
ones
and
the
ones
you
provide
like
the
database
password.
So
this
will
be
yours
until
you
decide
to
change
that
yeah.
G
It
also
supports
some
some
kind
of
rotation
more,
like
updates
of
existing
secrets,
so
certificates
can
be
regenerated
and
we
added
functionality
to
copy
secrets
into
other
namespaces,
so
that
you
well,
if
you
want
to
use
more
namespaces,
you
have
the
secrets
where
they
can
be
mounted
yeah
and
it
works
with
quarks
restart,
which
is
still
in
the
quarkx
operator,
the
quarkx
operator,
being
the
one
big
operator
which
we
started
to
do.
This
is
mostly
concerning
now
now.
G
It's
mostly
concerned
with
some
converting
bosch
manifests
into
kubernetes
structures,
and
we
all
these
smaller
components
like
standalone
components.
We
extract
them
from
the
main
code
and
make
them
run
as
separate
operators
and
they
have
separate
hand
charts.
So
you
can
use
quark
secret
on
its
own
right.
So
we
we
have
these
secrets
and
normally
you
would
just
mount
them
or
use
them
as
environment
variables.
G
Right
and
then
you,
your
12
factor
app,
would
be
fine,
because
all
the
configuration
would
be
there
would
be
secure
in
environment
variables
and
so
on
and
so
on,
but
not
all
applications
work
that
way
some
still
require
config
files
or
they
need
to
read
certificates
from
files
on
disk
also.
So
the
the
existing
situation
is
like,
like
this,
that
you
have
a
volume
which
is
in
reality
like
a
reference
to
the
secret
like
here,
the
gora
cert,
and
then
you
can
use.
G: If you want, you can use projection to map the keys to different paths, so the private key could become "key"... "certificate"; oh well, I didn't even change it. You can also use subPath if you don't want the files in a directory but as separate files. And there's this weird thing that your pod will get updated.
G
It
will
get
the
new
secret
content
in
the
file,
but
only
eventually
so
at
some
point
after
a
minute
or
who
knows
kubernetes
decides
to
update
the
data,
except
when
you
use
a
sub
path
and
also
not
for
environment
variables.
Only
for
files
and
yeah.
I
guess
this
is
well
known
right
and
here
you
can.
You
can
mount
the
the
secrets
here
like
search
you
can
multitude
path
and
for
the
environment
variables
right.
You
can
set
them
to
static
values
for
ex,
for
example,
the
five
parts
here,
it's
aguara
certificate.
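The existing mechanisms walked through above, plain key-to-path projection, subPath mounts, and env vars that either hold static paths or reference a secret key directly, can be sketched in one pod spec. All resource names here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: nginx
      env:
        - name: CERT_PATH                # static value pointing at the mounted file
          value: /etc/certs/certificate
        - name: DB_PASSWORD              # or reference the secret key directly
          valueFrom:
            secretKeyRef:
              name: demo-password
              key: password
      volumeMounts:
        - name: certs
          mountPath: /etc/certs/key
          subPath: key                   # subPath files are copied once, never refreshed
  volumes:
    - name: certs
      secret:
        secretName: demo-cert
        items:                           # projection: map secret keys to file paths
          - key: certificate
            path: certificate
          - key: private_key
            path: key
```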
G
What
we
did
mount
here
at
the
bottom
or
you
can
reference
the
key
directly
for
the
environment
variable
right
and
now
we
want
to
combine
those
in
templated
configs
so
to
build
application,
config
files-
and
I
think
cubecf
will
start
using
this.
This
was
the
main
idea
for
uaa,
so
we
don't
want
to
consume
the
bosch
release
anymore.
Instead,
we
want
to
use
the
native
uaa
release
and
that
somehow
needs
to
share
configuration
with
the
existing
settings
from
keepsafe.
G: Yeah, we will do this for all the keys: each value of a key can be a template. The demo I'm going to show will be using this input here, and I think it's better to switch to the terminal now.
G
So
you
should
be
able
to
see
my
terminal
and
the
k9s
on
the
side,
yes
cool,
and
so
this
is
the
templated
config.
I
will
also
create
the
secrets
here.
I
will
create
the
secret
with
a
key
named
password,
which
has
a
password
on
the
third
which
is
a
third
and
then
I
will
use
a
private
key
and
certificate
keys
from
the
third.
I
will
map
them
to
to
key
and
desert,
and
then
this
will
will
live
under
values
and
can
be
used
here
in
the
template
section
of
the
cid.
G
Since
it's
string
string.
This
is
a
more.
This
is
an
inline
yaml
and
since
we're
using
home
templates,
we
can
actually
use
some
hand.
Functions
like
to
json
yeah
and
here
at
the
top,
is
the
input
data.
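The input being shown has roughly this shape. Treat this as an approximation of the templated-config QuarksSecret from the demo, not its exact schema; the field names and API version may differ, and the secret names are made up:

```yaml
apiVersion: quarks.cloudfoundry.org/v1alpha1
kind: QuarksSecret
metadata:
  name: templated-config-demo
spec:
  type: templatedConfig
  secretName: generated-config           # where the rendered output is stored
  templates:                             # each value is a Helm template string
    config.json: '{{ .values.certs | toJson }}'
  values:                                # inputs mapped from existing secrets
    password:
      name: demo-password                # source secret name
      key: password                      # key within that secret
    certs:
      tls.cert:
        name: demo-cert
        key: certificate
      tls.key:
        name: demo-cert
        key: private_key
```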
G: So now I can take a look at the generated secrets. The password one, for example, is what you would expect: it has a key named "password" with its value. The templated one is much more interesting, because it has the described mapping, like the TLS cert and TLS key, then the inline YAML with the array, and it has the same values again.
G
Here's
the
json
output
from
the
from
the
debug
field
yeah,
and
we
hope
that
this
can
also
be
used
for
more
complicated
configs
in
the
future,
because
now,
with
all
the
secrets
living
in
with
all
all
the
credentials
living
in
kubernetes,
we
we
will
have
trouble
you
getting
home
to
read
them
right
so
generating
those
conflicts
has
to
to
happen.
In
cluster,
we
made
the
cid
a
little
bit
template
agnostic.
It's
not
like.
We
really
support
different
template
types,
but
we
could
imagine
that
we
support
more
than
ham.
G
If
somebody
likes
mustache
or
something
yeah,
that's
that's
it.
B
I
have
a
couple
questions
just
or
at
least
one
just
so
everyone
understands
why
why
the
quarks
team
made
this
and
how
it
might
be
more
universally
useful.
So
this
feature
was
designed
to
enable
cubecf
to
consume
the
new
uaa
distribution,
the
non-bosch
release
kubernetes
release
of
uaa.
G: So I think when we add more native components to KubeCF, we will have to provide config, and as long as the BOSH manifest, and the Helm chart that generates those manifests, remain in place...
G
As
long
as
this
is
our
single
source
of
truth,
we
will
have
to
somehow
share
and
at
some
share
these
credentials
and
at
some
place
right
either
at
deployment
time
or
at
runtime
inside
the
cluster.
We
will
have
to
generate
those
config
files,
and
I
could
imagine
that
you
also
try
to
get
him
to
fetch
those
info
this
those
files
from
the
cluster.
But
then
you
you
have
like
a
race
condition.
G
You
have
to
wait
for
them
to
be
generated,
so
it's
easier
to
to
do
it
in
clustering
for
c4,
kids,
I'm
so
from
the
other
direction
right.
You
have
all
native
components
and
you
have
no
bosch
right.
No
leftover
bosch
that
you
have
to
eliminate
step
by
step.
I
think
they
generate
most
of
their
contact
config
files
before
when
they
do
the
ytt
step
yeah
right.
So
they
probably
don't
need
this
yet,
but
it's
more
and
more
secrets
live
on
the
cluster
and
not
on
the
disk
of
the
deployer.
F
Can
speak
to
that
a
little
bit
as
I'm
standing
in
as
the
pm
for
relent,
and
we
are
evaluating
quark
secret
and
a
couple
of
other
standalone
components.
They
have
or
will
have
very
soon,
quarks
job
and
the
quarks
restart
that
can
restart
pods.
In
order
to
move
the
management
of
secrets
into
the
cluster.
B
This
makes
me
very
happy.
Thank
you
paul,
because
I'd
like
to
see
this
work
being
used
as
widely
as
I
can,
and
thanks
mario
for
for
for
showing
that
were
there
any
more
questions
about
the
future.
B: Okay, thank you, presenters. I hope everyone stays safe; I know some of us are in fire zones, some of us have limited air quality, and we're all dealing with COVID madness. So stay safe, be kind, all those good things, and thank you for joining us this week.