From YouTube: Cloud Foundry Community Advisory Board [February 2020]
Description
CAB Call agenda: https://docs.google.com/document/d/1SCOlAquyUmNM-AQnekCOXiwhLs6gveTxAcduvDcW_xI/edit?usp=sharing
A
Awesome, okay! Well, thank you everyone. This is obviously the February Cloud Foundry CAB meeting, so I am NOT Troy Topnik. Troy Topnik is your new host, taking over from Dr. Max, who served us well for the last, I think, three years. Troy, unfortunately, is running a bit late today, so he asked me to kick things off. Hopefully he'll join and we can formally welcome him.

So I guess we've got the usual updates on the agenda: CFF highlights, PMC project highlights, and then hopefully a couple of community projects that we can hear from, and see what's going on. So let's start. Swarna, maybe you could give us an update on what's happening generally in CFF?
B
Sure. I think the biggest item is definitely the Cloud Foundry Summits. I hope you all know that these are co-located with the Open Source Summits now: this year it's Austin for North America and Dublin, Ireland for the European event. The call for papers for Open Source Summit closed last weekend, but our call for papers is still open. Looking at the agenda again, I think the call for papers closes sometime in March; I will put the exact date here. We did have a call for co-chairs.
B
As always, we were looking for nominations for co-chairs. So far I've received only a handful of nominations. I could proceed with that, but I would like to keep it open for one more day. Nominations were supposed to close last night, but I will keep this open for one more day, so that folks can send in their last-minute co-chair nominations.
B
The only difference with co-chairs this time is that we will not just be asking the co-chairs to look at the submissions and help us rate the sessions, but we will also be looking for their advice or guidance as to what the format should be, because we only have two tracks: the developer track and the contributor track. So instead of having seven different tracks for a single-day event, we've reduced it to just two tracks.
B
So
to
answer
your
question
Tyler,
it
would
be
just
for
two
tracks,
the
developers
track
and
the
contributors
track,
and
we
are
looking
for
the
co-chairs
guidance
on
structuring
the
day
or
the
format
as
well.
What
would
be
the
best
given
the
submissions
or
given
the
single
day,
even
format?
What
would
be
the
best?
B
It
would
be
a
good
idea
to
have
session
after
session
after
session
the
way
you
we
usually
have
with
the
breakout
talks,
or
will
it
be
good
to
have
a
roundtable
discussion
or
a
retro
kind
of
a
discussion
and
then
move
into
session.
So
we
would
look
for
the
co-chairs
to
give
us
that
kind
of
a
guidance
mainly
because
this
time
we
also
will
not
be
having
keynotes
it
is.
As
of
now
we
will
we're
not
planning
on
having
a
keynote
stage,
given
it's
a
single
day
event.
B
We
did
not
want
to
take
away
much
time
from
the
breakout
talks
or
from
this
rest
of
the
discussions.
That
would
be
more
useful
for
the
community,
so
at
least
for
now
tentatively,
we
are
not
planning
on
keynotes,
so
we
would
be
looking
for
the
co-chairs
as
guidance
on
setting
the
structure
for
the
rest
of
the
day
as
well.
B
So
it's
a
little
more
important
as
I
mentioned
it
in
the
blog
post,
then
it
has
always
been
I
mean
it
has
always
been
important,
but
this
time
it's
a
little
more
important
because
you
all
will
be
structuring
the
day
as
well.
The
flow
of
the
day
as
well
so
with
that
I
will
update
all
of
the
links
in
the
meeting
notes
and
I
will
pass
it
back
to
Neil.
Unless
anyone
has
questions.
C
I know there are some component teams that have been getting their components up to the latest stable templates and configuration. As part of that, as one example, CAPI is progressing on both their work to provide K8s packaging and deployment artifacts for the Cloud Controller components, as well as their integration with the kpack machinery, to run buildpack staging tasks using cloud-native buildpacks. So I think they're very close to having the full cf push workflow working end to end, which is very exciting.
C
Loggregator is also moving forward on integrating their new architecture into the cf-for-k8s integration framework. That's all based on Fluentd for the log collection, and they're standardizing on syslog for transport across components. Then also on networking: they've had their Istio-based ingress integrated into that for a while, and they've been doing more now to take advantage of the Istio sidecars, to provide TLS between system components, and I think that's wrapping up.
C
So they are now also starting to focus on more configuration for certificate management: actually being able to configure certificates on the ingress gateway for TLS termination, and doing other management of certificates inside of the system. On component coordination, UAA is also continuing to progress on their K8s-deployable artifact, which I think is really generic and likely to have applications outside of just the CF integration responsibilities.
C
Also
volume
services
they
have
been
working
on
getting
their
existing
support
for
shared
SMB
volumes
for
app
containers
into
CF
frigates,
I
think
things
that
they've
been
doing
have
been
targeting
both
relevant
project
and
keep
CF.
So
I
think
they're.
Wrapping
that
up
and
they're
starting
to
look
more
deliberately
deliberately
at
expanding
support
for
their
volume
attachments
to
things
like
single
attached
storage
and
then
just
more
on
an
administrative
note.
C
The Diego team has decided to try out just using GitHub for any kind of story tracking and issue intake. I think at this point they've cleared out their Tracker project and they're just focusing on GitHub as the more community-accessible resource, so they're giving that a bit of an experiment. I see we have Julz here, and the updates from Eirini and Garden, which —
E
Isolation segments — although, I mean, well, that's a whole other thing, but you have lots of nice new options. We're keeping an eye out for some of those things, but more the user-facing side of things: things that users might expect to work, like, as you say, tasks, a lot of deployments, rollbacks, that kind of thing.
A
I know they have — I think they have their meeting this coming Monday, so look in the calendar as well for that one, and join if you're interested. All right, wow, that was a quick run through the PMC highlights. So I guess this is the always interesting bit — not that the last bit wasn't interesting — where we get to hear from people in the community. First up we have CF smoke tests. Onno, do you want to take it away?
F
Sound check, one, two, three — okay, yes, thank you. Let me see if I can share my screen; it's the first time I'm sharing using this. Let me see, it says like this — is my presentation showing up? Absolutely? Yep? Okay, just checking, because normally I would do this from my laptop, but I have been banished to the attic. So now I'm sitting at my regular PC and I just had to install Zoom and everything. But if it's working, then, yeah.
F
Well, it's actually — yeah, I don't really know what the definition of a community project is. Basically I asked Swarna how we could make things available in the best way, and that's how I ended up joining this call. So basically it's about, yeah, smoke tests. At this moment we have what we call our application platform. Well, we are relatively small compared to the big guns, of course, like SUSE and others, but we are a government organization and basically we have to cope with our own situation.
F
We also added one for Kubernetes. Basically it checks if the platform is operating; if anything fails, we will get an alert on the Slack channel, yeah, and we also have a nice dashboard we can share with the service desk, and our developers can also check out the dashboard. It's just an application running on the platform itself, which shows if everything is green, more or less. And if it's all green but an application is down — because that's what happened often in the past: an application stopped responding and the platform gets blamed — yeah.
F
That's one thing we were missing when we started out. We had Prometheus and Grafana for monitoring Cloud Foundry; all the BOSH VMs were monitored, except for the BOSH Director itself. I mean, when we had a problem with BOSH, we were not even aware of the problem until we realized we could no longer manage the BOSH VMs. So we added one.
F
Here we have a screenshot of our smoke test pipeline. I was not sure if I could show this live, so I made some screenshots, because, yeah, it all depends on the network connection, I guess. Basically it runs every five minutes: it pushes an app, and the app itself will check if it can connect to all its bound services and execute some basic tests. For example, for a database it creates a table, inserts a record, tries to read it again, drops the table, and, yeah.
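The database check just described — create a table, insert a record, read it back, drop the table — can be sketched like this. As a stand-in for whatever bound database service the pushed app actually talks to, the sketch uses SQLite through Python's DB-API; the table and column names are made up for illustration.

```python
import sqlite3

def smoke_test_database(conn):
    """Exercise a bound database end to end: create a table, insert a
    record, read it back, then drop the table. Returns True on success."""
    try:
        cur = conn.cursor()
        cur.execute("CREATE TABLE smoke_check (id INTEGER, note TEXT)")
        cur.execute("INSERT INTO smoke_check VALUES (1, 'ping')")
        cur.execute("SELECT note FROM smoke_check WHERE id = 1")
        row = cur.fetchone()
        cur.execute("DROP TABLE smoke_check")
        conn.commit()
        return row is not None and row[0] == "ping"
    except sqlite3.Error:
        return False
```

In the real test, the app would read its connection details from the bound service credentials (`VCAP_SERVICES`) rather than opening a local database.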
F
Well, this is, I guess, obvious. I guess I already mentioned we had no monitoring on BOSH, so the reaper, which is mentioned at the end, also contains a BOSH exporter. But if somebody now is going to tell us there is a standard solution, I'd love to hear about it, because this is a custom-built BOSH exporter to monitor BOSH itself. And this we still need to work on — we don't have this in place yet — and, well, again.
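A custom exporter like the one mentioned ultimately boils down to probing the Director and publishing the result in the Prometheus text exposition format. Here is a minimal sketch of just the formatting step; the metric names and the shape of the VM-state input are assumptions for illustration, not the team's actual exporter.

```python
def bosh_director_metrics(director_up, vm_states):
    """Render Director health and per-VM state as Prometheus text
    exposition lines (1 = up/running, 0 = anything else)."""
    lines = [
        "# HELP bosh_director_up Whether the BOSH Director responds.",
        "# TYPE bosh_director_up gauge",
        f"bosh_director_up {1 if director_up else 0}",
        "# HELP bosh_vm_running Whether a managed VM reports 'running'.",
        "# TYPE bosh_vm_running gauge",
    ]
    for vm, state in sorted(vm_states.items()):
        value = 1 if state == "running" else 0
        lines.append(f'bosh_vm_running{{vm="{vm}"}} {value}')
    return "\n".join(lines) + "\n"
```

A real exporter would serve this text on an HTTP `/metrics` endpoint for Prometheus to scrape, refreshing the states from the Director API on each scrape.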
F
Let me see — well, the main reason we have the smoke tests, basically: well, we can try to prevent things from becoming a real problem. That's why we initiated this project in the first place. We did run into some issues, so we would very much like to hear from other people how they manage smoke-testing their platform, if they do. Because in both cases we either had the BOSH Director filling up storage —
F
Well, these are the things we are still working on. We want to keep track of the availability of the platform somehow — some kind of nice figure for our management, to determine how much uptime we actually have. And I'm not sure if this is an excuse — it's just angry words — because we do often do buildpack upgrades, or, okay. So we would like very much feedback on this point, and also, yeah, last but not least, any feedback would be very much welcome, because I'm sure we are not the only ones facing problems like we did, and, well.
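One way to turn periodic smoke-test results into the kind of uptime figure mentioned here is simply the ratio of passing runs, optionally excluding planned-maintenance windows. A sketch — the idea of discounting maintenance windows (for example, during buildpack upgrades) is an assumption about how one might compute such a figure, not an established practice from the talk:

```python
def uptime_percentage(results, maintenance=()):
    """results: list of (timestamp, passed) pairs from periodic smoke runs.
    maintenance: (start, end) windows to exclude from the calculation.
    Returns the passing percentage over the counted samples."""
    counted = passed = 0
    for ts, ok in results:
        if any(start <= ts <= end for start, end in maintenance):
            continue  # planned work should not count against uptime
        counted += 1
        passed += ok
    return 100.0 * passed / counted if counted else 100.0
```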
F
No, definitely, I should check it out. Because, yeah, we got fairly little feedback. I did notice — I gave a presentation at the CF Summit, twice; that doesn't really make me experienced — but I did notice that whenever we were talking about the smoke tests, people were like, "Hey, we need something like this," and that triggered me: maybe we should make this publicly available in some form. And, yeah.
D
Hey, this is — and, yep, we will also be presenting on smoke tests. Our senior engineer is on the call, and he's been added as a presenter; he'll be presenting today. We also have Ally, who also worked on the smoke tests, not joining today, but our senior engineer will be the one to talk about the scale at which we run at T-Mobile and how smoke tests have really helped us.
G
Oh, no, definitely — I would like to say that you're on the right path. We are trying to do similar stuff, just in a different context, at a different scale, and I will definitely touch on some of the things that Onno has already set the context for. Before I move on with the presentation — although everyone is aware of what smoke tests are and what they are used for — I'd like to give an analogy for the whole drive to initiate this effort, although we have open-source smoke tests available.
G
So
what
was
the
need
for
us
to
sort
of
build
these
from
scratch?
Well,
the
simplest
analogy
I
can
think
of
is
monitoring,
D
health
and
performance
of
a
car,
let's
say,
for
example,
if
you
want
to
make
sure
that
your
car
is
working.
Finally,
one
thing
is
to
monitor
all
the
metrics,
whether
your
store
materials
working
fine,
whether
your
brakes
are
working
fine
and
things
like
that,
but
the
other
approach
is
to
see
whether
all
those
components
can
work
together
and
that's
what
the
tribe
is
for
us
to
build.
G
These
smoke
tests
want
to
test
the
workflows,
not
individual
health
of
the
met,
the
critical
components
of
count
foundry
but
overall
health
as
to
verify
that
those
components
of
working
well
together.
So
that's
our
drive
to
building
smoke
test.
We
have
the
platform
engineering
team.
We
operate
both
platforms,
foundry
and
kinetics.
G
The scale is pretty large. At T-Mobile we are running almost 80,000-plus application instances on Cloud Foundry and 15,000-plus pods on Kubernetes, across two data centers, three regions, and so on. Pretty much every team that builds applications within T-Mobile uses one of these two platforms, and since the inception of Cloud Foundry within our team, adoption has grown tremendously. Cloud Foundry is our more heavily used platform, I would say, and that basically puts us in a position where monitoring is mission-critical for us.
G
So
we
obviously,
we
decided
with
the
open
source
smoke
test,
but
there
were
some
gaps
that
we
found
and
those
are
causing
some
issues.
To
begin
with,
there
were
some
upstream
changes
that
our
pipelines
are
pulling
and
the
pipeline's
would
break
if,
if
something,
for
example,
recently
one
app
got
deleted
from
the
public,
github
repo
and
our
pipeline,
one
of
our
pipeline
started
breaking
so
those
kind
of
things
sort
of
put
us
in
a
bad
spot
where
we
were
relying
on
something
open-source.
But
it's
something
changes
and
we
are
constantly
using
it.
G
There were also some operations like — if you want to test how your MySQL service broker is doing, you don't want to start right from creating orgs and spaces and so on, because those operations don't really tell you anything about the MySQL service broker. So there were some unnecessary operations going on with those modules. And then, because you're doing all these operations, you obviously needed high privileges to run all of them, which was also a security risk for us. And then, at the end, there wasn't the level of granularity for reporting.
G
So with all of this, we thought: let's try to solve this problem from the ground up, right? So we decided to go ahead and write a customized suite of smoke tests for all the major components that we were interested in, and the goal was to have a solution that's reliable — meaning, if a smoke test fails, we want to know, or we want to be very sure, that there's a problem with the platform, rather than a problem with the smoke test itself.
G
We
wanted
to
make
it
as
reliable
as
possible,
also
plug
and
play
play
because
you
know,
as
we
are
growing
our
platforms
and
offering
new
services
for
application
developers,
we
want
to
make
sure
that
we
are
not
wasting
too
much
time
writing
new
tests
from
scratch,
and
it
will
take
significant
amount
of
time.
So
we
wanted
to
add
new
for
me
to
be
able
to
add
in
foundation
and
Agni
test
as
quickly
as
possible.
G
Again, foundation onboarding should be easy enough for us too, because over the last one year we have grown from 10 to almost 25 foundations at this point, and we are constantly increasing the number of foundations. So we wanted to make sure that our framework is easy enough to allow us to add new foundations fairly quickly — and not just adding a foundation, but also deploying the smoke test pipeline on the newer foundation as quickly as possible, through all sorts of automation.
G
And then we wanted to make sure that every smoke test job is customized in terms of how much execution time it takes. So we wanted to make sure that we are running each individual smoke test with a customized frequency on every single foundation, and then some level of BOSH-level cleanup, because sometimes when the service operations time out, you need to go to the BOSH level and clean up those resources. So we wanted to make sure that these are taken into account.
G
We wanted to make sure that we have enough of a panel of metrics so that we can build heuristics on top of them, and, last but not least, it should be workflow-oriented. So the goal is to test the workflow, not necessarily, you know, testing the individual components, because we have metrics for them, and when the metrics behave differently we know that certain components are not behaving properly. But for the smoke tests, the purpose is really to test the workflows.
G
So
what
exactly
do
we
test
right
now?
It
does
mostly
those
services
that
we
are
offering
to
our
application
developers
and,
as
obviously,
the
poor
functionality
of
the
platform
needs
to
be
tested.
Then
autoscaler
is
one
of
the
most
utilized
service
among
our
application
developers.
We
offer
my
support,
Redis
rabbitmq
cloud,
cache
all
the
spring
cloud
services,
and
then
we
have
our
own
monitoring
tools,
metadata
tools
that
need
to
be
on
on
top
of
in
terms
of
their
health
and
then
are.
G
Are our applications able to send logs to Splunk, which is our logging and monitoring platform? And then we started testing the basic Kubernetes services as well. Going forward we have more in the works: BOSH aspects — releases, you know, deployments and such — load balancers, blob stores. And then, at the end, because our platforms are growing and the number of services we offer is growing, we definitely want to make sure that every alert is actionable.
G
We want to build automation around taking steps, in a much more educated and experienced way, to recover the platform or the components that are failing, or not in a good state, from that state. So that's our goal — those are the things that are coming in the future. Just to give a quick example of what we are testing and how we are testing: this is an example of the autoscaler smoke test, a very basic workflow. You know, start with logging back into your foundation, target a specific org and space —
G
First,
we
checked
the
autoscaler
apps
that
are
provided
by
the
platform
are
up
and
running,
and
then
we
basically
download
the
plugin
and
go
through
a
complete
life
cycle
of
how
an
app
would
be
using
the
autoscaler
service.
So
right
from
creating
the
service
instance
pushing
a
sample
app.
You
know,
binding.
The
service
instance
scaling
the
app
frightening
some
or
creating
some
auto
scaling
rules
based
on
their
HTTP
latency
and
memory
and
CPU
requirements
and
so
on,
and
then
we
shall
read
some
fake
traffic
to
that
and
see
the
auto
scaling.
G
Events
are
taking
place
as
expected
and
at
the
end
we
do
all
the
necessary,
cleaner
and
once
everything
is
okay,
we
report
all
the
metrics
for
each
and
an
individual
step
here
to
our
matrix
platform,
which
is
plunk.
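The scaling rules created in that lifecycle (HTTP latency, memory, and CPU thresholds) are what the fake traffic is meant to trip. A toy model of that rule evaluation — purely illustrative of the idea, not the App Autoscaler's actual algorithm or parameters:

```python
def desired_instances(current, metrics, rules, min_inst=1, max_inst=10):
    """Return the instance count implied by simple threshold rules.
    metrics: observed values, e.g. {"cpu": 85}.
    rules: {"cpu": (low, high), ...} per-metric thresholds.
    Any metric above its high threshold scales up by one; only if all
    metrics sit below their low thresholds do we scale down by one."""
    if any(metrics.get(m, 0) > hi for m, (lo, hi) in rules.items()):
        return min(current + 1, max_inst)
    if all(metrics.get(m, 0) < lo for m, (lo, hi) in rules.items()):
        return max(current - 1, min_inst)
    return current
```

The smoke test's job is then to drive a metric over its high threshold with synthetic traffic and assert that a scale-up event is actually observed.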
So obviously things can go wrong at any of these steps, so we want to make sure that we're not wasting time executing any further operations if, you know, the operation that failed is a blocking operation. For example, if create-service fails, there is no point in doing anything else beyond that, because obviously if you can't create an autoscaler service instance, you can't bind it to an app and use the service as expected. So we capture the results of every single step here and build heuristics on the Splunk side. It's the same for Spring Cloud services.
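The fail-fast behaviour just described — run ordered steps, stop at the first blocking failure, and record a per-step result for the metrics platform — can be sketched like this (step names are illustrative):

```python
def run_smoke_steps(steps):
    """steps: list of (name, func) pairs executed in order. Each func
    returns True/False. Execution stops at the first failure, since
    later steps (e.g. bind-service after a failed create-service)
    cannot succeed anyway.
    Returns an ordered list of (name, status) for metrics shipping."""
    results = []
    for name, func in steps:
        ok = func()
        results.append((name, "pass" if ok else "fail"))
        if not ok:
            break  # blocking operation failed; skip the rest
    return results
```

Shipping the `(name, status)` pairs for every run is what makes the later per-step failure analysis in Splunk possible.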
G
We basically test all three major service types — meaning config server, service registry, and circuit breaker — and obviously the app that we are using requires MySQL as well, so we create a MySQL instance, and we check the entire workflow, right from creating the services, to binding the services to a sample app, and then making sure that the app is able to read and write from those service instances. So that's our entire workflow. Any questions so far?
G
The workflow right here is from the smoke test, and we ship the metrics, because if you don't store the metrics of the results, we can't build heuristics — we can't be sure what the pattern of those failures is, so that we can take more predictive actions, and actions to avoid the kinds of situations that are causing those failures.
G
This is just a sample dashboard and, you know, the alerts that we get from Splunk. What we are showing here in Splunk is: these are the foundations, and these are the sources — which is basically the name of the smoke test job itself that is failing on that particular foundation — and here is a Concourse URL that you can follow to actually go to Concourse, look at the logs, and figure out what actually went wrong, and so on.
G
So we have the metrics available at that level of granularity, and we can obviously look back in the past and analyze how many times a particular operation has been failing on a particular foundation, or across a region or foundations, and such. That really helps us in identifying and analyzing the problems that are causing these failures. Just a high-level overview of our deployment strategy: we have a bootstrap pipeline which basically monitors a git repo that contains the pipelines, environment variables, and the smoke test scripts.
G
It
goes
ahead,
checks,
the
git
repo
and
any
changes
of
Kuran
get
report
deploys
the
pipeline
on
every
region,
specific
pom
horse
and
within
every
region,
specific
pound-force.
We
have
teams
dedicated
to
all
the
Foundation's
within
that
particular
region,
so
the
bootstrap
pipeline
on
any
change
will
take
the
code
and
deploy
the
pipeline
on
the
smoke
test
pipeline
of
each
foundations
within
their
individual
team
and
what
that
pipeline
contains.
It
contains
all
jobs
for
all
these
components
or
services.
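The bootstrap behaviour — detect a change in the config repo, then redeploy the smoke-test pipeline for every foundation in every region's Concourse — reduces to a fan-out like the following. The repo polling and the deployment call are stubbed here; a real setup would run something like `fly set-pipeline` against each Concourse, which this sketch does not attempt.

```python
def bootstrap(repo_head, last_seen, regions, deploy):
    """If the repo head moved since last_seen, deploy the smoke-test
    pipeline for every foundation in every region, then return the new
    last-seen revision.
    regions: {"west": ["foundation-a", ...], ...}
    deploy(region, foundation): callback performing the actual set-pipeline."""
    if repo_head == last_seen:
        return last_seen  # nothing changed; leave the pipelines alone
    for region, foundations in sorted(regions.items()):
        for foundation in foundations:
            deploy(region, foundation)
    return repo_head
```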
G
We
use
world
as
our
secret
store,
and
it's
basically
designed
based
on
the
palm
holds
deployment
strategy.
So
every
region
specific
force
has
dedicated
path
in
the
world
that
you
use
and
so
on.
So
with
this
I'm
like
you
want
to
just
screenshot.
This
is
just
a
real
picture
of
what
we
saw
in
this
smoke
test
pipeline.
We
have
all
these
smokers
jobs
for
a
given
foundation
that
we
used.
So
that's
how
we
are
basically
testing
our
foundations.
As
of
now,
how
have
they
really
helped
us?
G
We do foundation updates roughly every three to six months — that's our time frame — and given the number of foundations we have, it requires a significant cycle of, say, eight to ten weeks. That's a very critical time, when we want to be on top of things as far as the workflows of the platform are concerned, so we use the smoke tests regularly while we're doing upgrades. The retail season is also very important for us — typically the September-to-December timeframe, when new phones get launched, and so on.
G
We want to make sure that those applications are able to use the platform services as expected, and we use the smoke tests to verify and ensure that during new phone launches. And on a daily basis, of course, we run these pipelines at regular intervals — every job has a customized frequency — so on a daily basis we are running these to ensure the good health of the platforms. Really, most of the smoke test —
G
— jobs have helped us on different occasions, but typically autoscaler, Spring Cloud services, and the one dedicated to the logging workflow are some of the more critical ones that have benefited us more frequently than the others, and the reasons are obvious. You know, all our applications — or, I would say, most of our applications — are autoscaled. We provide these recommendations to our application developers on a regular basis, to follow the good practices —
G
— you know, having every application use the autoscaler, and binding to the syslog services so that your logs are available in Splunk, and so on. So the smoke tests that actually monitor these workflows are super critical for us; we use them way more frequently than the others. And overall, I would say every single smoke test has helped us, in one way or the other, at one point or the other. Really, the goal is to know about problems before they impact the customers.
F
Because we also had some issues — let's say challenges — with the Pivotal autoscaler; that's why we also included it in our smoke tests. And you do have a history there — I mean, that's cool; that's exactly the thing we are missing. Do you extract the data from Splunk, or is that not where the metrics go? Yes, so —
G
What we do is we basically emit the metrics from the smoke tests, which are running in Concourse, on to Telegraf. Telegraf is basically an open-source sort of data format adapter, so you can translate whatever input format into any given output format. So we take JSON, convert it to the Splunk metrics format using Telegraf, and ship those metrics to Splunk.
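The transformation Telegraf performs here — take the JSON emitted by a smoke-test job and reshape it into metric events for Splunk — looks roughly like the following. The field names and document shape are illustrative assumptions; Telegraf's own Splunk metric serializer handles the real wire format.

```python
import json

def smoke_json_to_splunk(raw, foundation):
    """Convert one smoke-test result document, e.g.
    {"job": "autoscaler", "steps": {"create-service": 1, "bind": 0}},
    into a list of Splunk-HEC-style metric events, one per step."""
    doc = json.loads(raw)
    events = []
    for step, value in sorted(doc["steps"].items()):
        events.append({
            "event": "metric",
            "source": doc["job"],
            "fields": {
                "metric_name": f"smoke.{doc['job']}.{step}",
                "_value": value,
                "foundation": foundation,  # dimension for per-foundation dashboards
            },
        })
    return events
```

Tagging every event with the foundation name is what makes the per-foundation failure dashboards described earlier straightforward to build.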
A
Any other questions? I know we're heading up towards time. Thank you very much — that was super interesting, actually. So, there was a great suggestion from Wayne in the chat that maybe, between those two projects and Wayne's effort, there's kind of an extensions project in the making; that'd be great to see. All right, thank you. Let's see, we've got eight minutes left. I think Sai was potentially going to give us a date for KubeCF — yeah?
H
We were just chatting offline — I'm very sorry, everyone. This is Troy, and I'm very sorry to have missed the first part of this; I will, of course, watch the recording. Thank you, thank you very much, Neil, for covering for me. I just chatted with Sai and we've agreed to postpone that item for next week — is that — I think that's what we just agreed, so, yeah.
H
Camping
we've
got
a
bit
more
time.
The
keep
seeing
F
incubation
proposal
on
the
agenda,
I
think
Vlad
is
still
sick,
unfortunately
get
well
soon
glad
that
was
just
to
point
it
out
on
the
CF
dev
list.
If
you
have
any
comments
about
the
document
where
he
proposes
where
we
propose
incubating
cube,
CF
have
a
little
look
at
that.
That
thread
and
please
feel
free
to
make
comments
in
in
the
document
that's
linked
and
then
that's
going
to
be
circulated
amongst
the
the
PMC.
So.
H
That could be on the cf-for-k8s channel. We talked in the Kubernetes SIG yesterday about how the cf-for-k8s channel is sort of an overloaded channel name, because it's also the name of the SIG's repo, but I think we're all there. Or, maybe better yet, there is a kubecf-dev channel — let me just drop that here.
F
Can I ask one question about the KubeCF project? Because, yeah, I think the last CAB call was also about the incubation of KubeCF — or was that another call, or maybe one call where it was also mentioned. What we are interested in right now, since we already run Kubernetes and we run Cloud Foundry — I don't know —
H
There are, in fact — and this came up, so I want to invite people who are interested in this topic to join the SIG, the Kubernetes SIG meeting, which happens on the third Tuesday of every month. So the next one will be on the 17th of March at 8:30 a.m. Pacific; I can't remember what that is in Central European time.
H
There
are
two
initiatives
going
on
right
now
and
there's
a
bit
of
we
talked
about
this
yesterday,
there's
a
bit
of
a
confusing
landscape
for
the
uninitiated.
The
uninitiated
to
join
in
so
we've
got
a
little
committee
from
that
sync:
that's
going
to
get
together
and
untangle
it
and
point
people
to
the
right
places.
The
short
answer
is
cube.
Cf
is
going
to
be
the
sort
of
working
cloud
foundry
currently
based
on
Bosch
working
with
the
CF
operator
and
the
CF
4k
eights
is
the
integration
point
feel
free
to
chime
in
sigh
I.
H
don't want to speak for you — for the new upstream Kubernetes-ification of the core components, and KubeCF will, of course, try to integrate those as well. So there are two projects going on right now; KubeCF is one of them, but at the moment it's not under the CFF umbrella, and we're trying to get that in.
H
We're hoping to be able to announce the incubation soonish, okay, once it's had some review, and then we'll see how it's progressing. But, cool — thank you. But yeah, please, please join the conversation in the Kubernetes SIG if you're interested, and there's that channel as well on Slack, cf-for-k8s, yes, where we talk about both projects and, in general, the directions. Cool, thank you very much for the feedback.
B
— your suite and their smoke test suite. Working with the legal team, I will probably be sending out an email to the T-Mobile folks, and also to Onno, to see if we can follow up on the conversation — like continuing the conversation that Wayne had proposed — to merge all these efforts and maybe make this a really effective extensions project. So stay tuned for that.