From YouTube: Cloud Foundry Community Advisory Board Call [May 2020]
Description
Agenda available here: https://docs.google.com/document/d/1SCOlAquyUmNM-AQnekCOXiwhLs6gveTxAcduvDcW_xI/edit?usp=sharing
B: Welcome to this month's CAB call. I believe it's the month of May — May, that's it, it's not April. I can't really keep track; I literally have to check the calendar. Anyway, welcome, and thank you all for coming to the Cloud Foundry Community Advisory Board meeting for the month of May. We've got the usual updates: first from the Cloud Foundry Foundation — we have Chip here, and maybe others that I can't see — and then we will go through the PMC project highlights. After that we have two presentations from T-Mobile. I believe we have the people from T-Mobile here; excuse me, I don't have their names in front of me, so if you could introduce yourselves when we kick off. The first is on dealing with noisy neighbor problems on CF, and the other presentation is on automating the lifecycle management of large-scale CF environments. So, to kick things off, can I turn this over to you, Chip, to let us know what's happening with the Cloud Foundry Foundation?
C: ...you know, across as many geographies as possible. Sometimes I also wish the earth were actually flat, so that we could just truly eliminate time zones; that would be great. So anyway, the schedule is out and registration is open. Anybody on the cf-dev mailing list should have seen a contributor code — feel free to use it if you're on that mailing list — and all of our primary member contacts received codes to register. So there are lots of opportunities to get registered and see some really interesting content. I've said it before and I'll say it again: this is a virtual event, everybody's going virtual, nobody knows how to actually do it yet, and we're all learning from each other, both from the commercial companies and from the community. SUSE is an example: I think they are actively running an event right now, virtually, using the same platform that we'll be using, so there's quite a lot of interesting things that we're going to learn from that.
C
D: Right, yeah, I can cover that. A few highlights from the Runtime PMC: we've had some big releases from the integration projects. Release Integration rolled out cf-deployment version 13 and version 0.2 of cf-for-k8s over the past month, and, not to be outdone, KubeCF released its version 2.0; I know they've been busy integrating some of the Helm charts, like Eirini and eirinix, into that release to replace the BOSH-release-derived artifacts. And then there's something that I think we'll have more and more information and communication about over the next few months.

D: The CLI team is getting ready to release the initial GA version of the v7 CLI. This is going to have some breaking changes compared to v6, primarily in support of things like rolling updates and other commands that relate to the v3 API that the Cloud Controller team has been developing underneath to support those kinds of operations over the years.
D: And then, in terms of other project updates: CAPI has been moving along with its integration with kpack on the Kubernetes side, to allow for operations like updating the buildpacks and rootfs that kpack is managing for application images, and then figuring out how those rootfs updates, for example, are going to propagate out to application instances in that context.
D: Some other improvements: Eirini now supports rolling deploys for applications, and they are working to support application tasks as well. On networking — I know we've discussed this in some other arenas — they've been driving out some work around having a CRD for CF routes, and they're continuing work to integrate with that and to support it as a way of translating routing information into the underlying Kubernetes cluster. And then there's the logging and metrics team, which we've recently renamed from Loggregator to correctly denote their full scope of responsibilities.
B: And I'm going to take a wild guess that we also don't have Dr. Max for Extensions, because he is trying to vacate that position. On that note, I wanted to call people's attention to the post from a few weeks ago seeking nominations for the Cloud Foundry Extensions PMC lead. I've been looking around at SUSE to see if there were some volunteers, and we may not have someone that I can nominate for that position.
B: So I would like to suggest that this is a good opportunity for another organization in the community, or another individual in the community who's interested in seeing the Extensions PMC grow, to step up and volunteer for this. You can talk to any of the Foundation people, or to me, about that on Slack if you don't feel comfortable speaking up now or nominating yourself on the mailing list, like I did for this job.
B: So, just calling your attention to that: there is a vacancy, and we'd love to see someone who has a passion for all of the extensions in the community and for making that community grow a little bit. Please have a look at that post and have a think about whether you'd like to lead that, and I'll continue looking around for volunteers and asking people at SUSE about it. If there's nothing else from the PMC projects, can I hand it off to somebody from T-Mobile?
E: So when I say scale, our scale is pretty big in terms of PCF: we've got over 20 foundations and over 28 individual deployments of PCF in various stages, data centers, and regions across the country; over 70,000 containers; 700 million daily transactions; 3,000 individual applications; and over a hundred different application teams. So it's a really big, really complicated problem to solve, especially given the size of our team: right now we're about four or five people that concentrate on this. We have some ancillary help from other teams, but it's a big challenge.
E: Excuse me. The problem was frequent log loss across applications on PCF, and this could happen at any step in the chain. Essentially, it could be in our Loggregator components — including the syslog agent in the newer versions — which feed into our syslog components; the syslog components are pulled from by a Splunk forwarder; and on the Splunk side we've got indexers and various other components.
E: At any step in that chain we could see issues with, you know, CPU or memory, or just queues filling up, and so on and so forth. And what we found out was that, more often than not, the cause of the problem was a noisy neighbor: an application that was just logging really excessively. Since it's a shared platform, the containers on a Diego cell share the components that get the logs off of the node, and it's the same with the syslog nodes and the Splunk forwarder. So there was typically a single application, or just a few applications, responsible for flooding everything and causing the log loss.
E: So how much log loss were we seeing? At some points we were seeing up to 75% log loss. Typically somebody would be doing a load test, or they left INFO-level logging on, or there was some kind of stack trace printing out a thousand-line stack trace, line by line, you know, millions of times a second or something ridiculous like that. So we could see peaks like that, and on a sustained basis we could see in excess of 15% log loss.
E: Even when we weren't seeing log loss, we were seeing substantial delays. The components have various queues and so on that fill up with messages, and those messages are, you know, pulled off of the queue and eventually sent down the pipeline, so those components were all becoming overwhelmed. Even when logs weren't being dropped, we would see delays of 15, 30, sometimes 60 minutes due to various issues along the chain. Another issue that we had was the physical retention after logs actually get to the Splunk side and we're retaining them.
E: Of course, you know, that storage is SSDs and platter drives and so on, and our retention is determined by how much space we have left and how quickly we're using that space. Because of how noisy these neighbors were, and because of how much they were logging, those disks were filling up really quickly, and we had, you know, under four days of retention. That's not really helpful for teams that discover a bug today and are trying to trace it back to their last release...
E: ...you know, a week ago or something to that effect, and the logs don't exist. Or they're trying to do some kind of reporting or reconciliation, or just trying to track something down. It's unusual to catch something within that time window unless it's really bad, so it wasn't a good experience.
E: Some of the challenges we had in tackling the issue: the complexity of the infrastructure; the fact that we didn't necessarily own every component — the Splunk team was an entirely different team; we didn't have a good view from the application perspective, because we didn't know all of the components; and we didn't have a consistent alerting approach. Because of the various points in the chain where we could see this issue, it ended up being basically an ad hoc process, where our daytime on-call would be getting...
E: ...you know, onto the bridge if we needed to, and it just took a long time, and people were losing logs all the while. So, you know, it's just not a good customer experience. In order to solve it, we started out by asking: what's our service level objective — what do we want to accomplish? How do we get the users to help us accomplish this, and what are the terms of service for the users? And how do we figure out when users or customers are violating those terms of service?
E: And what's the mechanism we can use to inform them so that they can take action? The SLO that we ended up coming up with was 90%: we felt 90% was reasonable at all times — ideally more than that most of the time — but we should never have fewer than 90% of our logs reach the end destination, which for us is Splunk, and within a reasonable time frame; we shouldn't see big delays.
E: We shouldn't be dropping anything. And in terms of, excuse me, the terms of service for the user, that's just based on our observations about when the components started to see issues, and based on some of our conversations with Pivotal. We decided that an individual application instance shouldn't exceed a hundred thousand logs per minute, and that an individual application instance should never exceed...
E: ...a burst rate of more than a million logs per minute, which sounds ridiculous, but we've had applications log a lot more than a million logs per minute, and it's a wonder they stay running — but we do see that. So, in terms of the implementation that we came up with: the solution was a set of microservices, and we have a job that runs on Concourse via a cron.
E: Right now it runs every hour. We post a query and a callback URL to our Splunk query service; the Splunk query service has knowledge of, and credentials for, the Splunk clusters it needs to interact with, and it executes that query against each of those clusters. We have one cluster for NP, our non-production environment, one for our payment-card concerns, and one for production traffic. The query determines which applications are violating the terms of service, just based on, you know, some query language, and it gets back the metadata that we have about those apps.
E: Some of that metadata is added from, excuse me, our syslog infrastructure, and some of that's just there by default. So it collects that information, bundles it up in some JSON, and posts it to our noisy neighbor microservice. The noisy neighbor microservice takes those orgs and looks them up in our internal metadata store, which for us right now is, excuse me, Bitbucket.
E: It gets back the configs that show which repositories map to which owners, which in our internal infrastructure are employee IDs. So it gathers up those employee IDs and sends them to our email lookup service; the email lookup service interacts with our LDAP service, converts those employee IDs to email addresses, and sends that back to the noisy neighbor service. Then, finally, the noisy neighbor service bundles it all together, puts it into a templated email, and ships that out to our customers via our SMTP server.
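As a rough illustration of the chain just described — every endpoint, payload field, and query below is a hypothetical stand-in, since the talk names the services but not their interfaces — a sketch of the flow might look like this:

```python
"""Hedged sketch of the noisy-neighbor notification chain. All URLs,
fields, and the query are illustrative assumptions, not T-Mobile's code."""
import smtplib
from email.message import EmailMessage

import requests

SPLUNK_QUERY_SVC = "https://splunk-query.example.internal"      # hypothetical
NOISY_NEIGHBOR_SVC = "https://noisy-neighbor.example.internal"  # hypothetical
EMAIL_LOOKUP_SVC = "https://email-lookup.example.internal"      # hypothetical

# Terms of service from the talk: 100k logs/min sustained, 1M logs/min burst.
SUSTAINED_LIMIT = 100_000
BURST_LIMIT = 1_000_000

def find_violators() -> list[dict]:
    """Post a query plus callback URL to the Splunk query service, which runs
    it against each cluster (non-prod, payment cards, prod). The real flow is
    callback-based; this sketch returns results synchronously for brevity."""
    resp = requests.post(f"{SPLUNK_QUERY_SVC}/query", json={
        "query": "search index=pcf | stats count by app_id",  # placeholder
        "callback_url": f"{NOISY_NEIGHBOR_SVC}/results",
        "sustained_limit": SUSTAINED_LIMIT,
        "burst_limit": BURST_LIMIT,
    }, timeout=30)
    resp.raise_for_status()
    return resp.json()["violators"]

def owners_to_emails(employee_ids: list[str]) -> list[str]:
    """Resolve employee IDs to addresses via the LDAP-backed lookup service."""
    resp = requests.post(f"{EMAIL_LOOKUP_SVC}/resolve",
                         json={"employee_ids": employee_ids}, timeout=30)
    resp.raise_for_status()
    return resp.json()["emails"]

def notify(app: dict, emails: list[str]) -> None:
    """Render the templated email and ship it via the internal SMTP relay."""
    msg = EmailMessage()
    msg["Subject"] = f"[noisy neighbor] {app['name']} exceeded its log rate"
    msg["From"] = "platform-team@example.internal"
    msg["To"] = ", ".join(emails)
    msg.set_content(
        f"App {app['name']} logged {app['rate']} lines/min "
        f"(limit {SUSTAINED_LIMIT}/min). See the tips below to reduce volume."
    )
    with smtplib.SMTP("smtp.example.internal") as smtp:
        smtp.send_message(msg)

for app in find_violators():
    # Owner employee IDs come from repo metadata in Bitbucket (not shown).
    notify(app, owners_to_emails(app["owner_employee_ids"]))
```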
E: In terms of the email itself, we include some helpful tips on there — things that we've seen in the past; I've mentioned a few while giving the talk. For example, logging stack traces with line breaks: you could see a thousand-line stack trace, but if there are line breaks, that's a thousand individual events per stack trace. And if you've got some recurrent error — we've seen this with, for example, interactions with Costco, or people trying to reinitiate sessions — you know, it's thousands and thousands of lines.
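To make the stack-trace tip concrete, here is a small illustration (not code from the talk) of the difference between emitting one log event per trace line and one event per trace:

```python
"""Logging an exception line by line emits one event per line, so a
1,000-line trace becomes 1,000 events for the pipeline to carry; logging
it through the framework emits a single multi-line record instead."""
import logging
import traceback

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("demo")

def risky() -> None:
    raise ValueError("boom")

try:
    risky()
except ValueError:
    # Anti-pattern: one log event per stack-trace line.
    for line in traceback.format_exc().splitlines():
        log.error(line)

    # Better: a single event carrying the whole trace; exc_info=True
    # attaches the traceback to one log record.
    log.error("request failed", exc_info=True)
```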
E: So, in terms of how effective it is: this is a chart from when we first released this, and it really does work, which we're happy with. We've been consistently meeting our SLO since we released the noisy neighbor service, and for the most part customers have been, you know, actually glad to hear about it: they had no idea that they were logging that much, or we end up catching an error that they weren't even aware of.
E: It's been helpful for the customer, and helpful for the health of the platform in getting all of our logs to their destination. It's improved our retention on the Splunk side of things, and in terms of our support — our on-call during the day — we're saving on the order of, some days, a couple of hours we'd have had to spend on this that we don't have to spend now. So it's been really helpful. And that is the last slide, so if anyone has any questions...
E: Occasionally Splunk has been the problem. When it comes to log loss in total, that typically would have been in our Loggregator infrastructure, but we have seen instances where the indexers and other components on the Splunk side were becoming overwhelmed, and we would see delays there. Our metrics are in a separate index, so, okay, yeah, there's a bit of a buffer there.
G: Can everyone see my screen? Okay, yes, excellent. My name is Brendan Indra; I'm also on the platform infrastructure engineering team, and this presentation is about lifecycle automation. So, a refresher on our scale, which was already covered: 20-plus foundations, 70,000 containers, some 700 million daily transactions, 3,000-plus applications, and 100 very different dev teams — a fairly large scale that we have to deal with here. So, the challenges before automation. Consistency: this was a big one.
G: Every time we brought a foundation online, the parameter changes, or the tiles that we used, or the versions that we used would sometimes vary, and the more foundations we brought online, the greater that variance became. It made it very hard to predictably upgrade or update things. Also, we didn't really have any change-tracking mechanism, so if an engineer made a little change to test something and saved it, we weren't aware of that; everyone on the team wasn't necessarily on the same page, or stuff was not noted, or documentation wasn't updated.
G: As you can see, this would also cause problems during upgrades: two foundations should have the same settings but don't, and when you go to do an upgrade, you get pretty unpredictable results. This resulted in lengthy and chaotic upgrades; it would take a couple of weeks to get a single foundation through a point upgrade. Very, very challenging in the early days before we had automation.
G: So, our solution: we use Pivotal Platform Automation and Concourse pipelines, and we've stored the foundation configurations in source control. All of that documentation is now living documents: whenever we go to use them, we're updating the configuration files themselves and then staging the changes, so anything that's in those configs is actively being used, and we don't really have to worry about them going out of date. Any time we need to look at settings or previous history, anything along those lines, we can look in source control and see exactly what took place.
G: We also settled on no manual deployments; ideally, everything should be run through automation. It reduces errors — really, copy/paste problems, those sorts of things — and it also provides a paper trail, if you will, for any changes that we make. This also required a bit of team process: people were used to doing things in Ops Manager directly, even by hand, so it took a little bit of a mindset change to make that happen, but through team agreements we managed to push through and do it. This helped maintain consistency.
G: We also have a peer review process that requires a minimum of two engineers to look at any changes that are going to be made and approve them before we actually apply them. This helps, you know, reduce fatigue when you're making lots of changes, and helps reduce mistakes, those sorts of things. We also use Slack for a lot of our communications around these changes, and I'll show an example of that in a little bit.
G: This is our pipeline flow overview. I'll be going through this in a little more detail later, but just to give you a rough idea: each one of these is for an individual tile, and the entire tile process will check for pending changes, perform an initial backup, stage the changes, then perform another backup, and then compare those backups and look at any drift that's there.
G: This helps us avoid potentially introducing any unwanted variables and also make sure that we're putting the right stuff in. This is what we use for our Slack notifications for our automation: we cover states such as job started, succeeded, failed, and aborted — pretty much any state that you can get in Concourse we've got covered here; there's an example on the right.
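A minimal sketch of such a notifier, as a helper a Concourse task could call on each state change — the webhook environment variable and the emoji mapping are illustrative assumptions, not T-Mobile's values:

```python
"""Post one Slack line per Concourse job state change, via a standard
Slack incoming webhook, so the whole team can follow a deployment."""
import os

import requests

WEBHOOK = os.environ["SLACK_WEBHOOK_URL"]  # Slack incoming-webhook URL

STATE_EMOJI = {  # hypothetical mapping of job states to emoji
    "started": ":hourglass:",
    "succeeded": ":white_check_mark:",
    "failed": ":x:",
    "aborted": ":no_entry:",
}

def notify(foundation: str, job: str, state: str) -> None:
    emoji = STATE_EMOJI.get(state, ":grey_question:")
    text = f"{emoji} [{foundation}] {job} {state}"
    requests.post(WEBHOOK, json={"text": text}, timeout=10).raise_for_status()

notify("staging-foundation", "stage-product/cf", "succeeded")
```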
G: This is something that the entire team keeps an eye on on a regular basis, so we know pretty much what's going on with any given foundation and the automation just by looking in this Slack channel. So, the actual process: we make our changes locally, creating a branch based off of master, which allows multiple engineers to work on individual foundations at the same time. Once changes have been prepared, they are checked into source control and pushed upstream, so the configuration changes live in source control.
G: Basically, the idea is: if somebody makes a change in Ops Manager and then saves it but doesn't actually apply those changes, or in the event that an apply-changes run failed and got left in that state, this job will check each product in Ops Manager to make sure there are no pending changes. If there are, it will halt its process, and then you have to go in and remediate it: figure out what had changed — or, in that case, revert Ops Manager — and then move forward.
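A sketch of such a gate, assuming Ops Manager's v0 API exposes staged pending changes at the path below — treat the endpoint path and response shape as assumptions to verify against your Ops Manager version:

```python
"""Query Ops Manager for staged pending changes and stop the pipeline if
anything is pending. URL and token handling are illustrative only."""
import os
import sys

import requests

OPSMAN = "https://opsman.example.internal"   # hypothetical Ops Manager URL
TOKEN = os.environ["OM_TOKEN"]               # UAA access token (assumed)

def has_pending_changes() -> bool:
    resp = requests.get(
        f"{OPSMAN}/api/v0/staged/pending_changes",
        headers={"Authorization": f"Bearer {TOKEN}"},
        verify=False,  # many Ops Managers run with self-signed certs
        timeout=30,
    )
    resp.raise_for_status()
    # Each entry describes a staged product and its pending install action.
    return any(p["action"] != "unchanged"
               for p in resp.json()["product_changes"])

if has_pending_changes():
    print("Pending changes found: remediate in Ops Manager before continuing")
    sys.exit(1)
```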
G: This is where the tile staging happens: the configuration files are interpolated and output for use later in the pipeline. In short, this process replaces any pipeline vars in a tile's set of configuration files. The tile specified in the version config is examined, and its requirements are printed to the console for future reference. That's the configure-product portion of it — it looks like I skipped ahead slightly there. We also go through and upload the stemcell, upload the tile, and then finally we configure the product.
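The real pipelines use Platform Automation's interpolation step for this; purely to illustrate the idea of replacing ((var)) placeholders in a config file, a toy version might look like:

```python
"""Substitute every ((name)) placeholder in a tile config template,
failing loudly on a missing var so a typo stops the pipeline instead of
deploying a bad config. Template and var names are illustrative."""
import re

def interpolate(template: str, vars: dict[str, str]) -> str:
    def sub(match: re.Match) -> str:
        name = match.group(1)
        if name not in vars:
            raise KeyError(f"missing value for (({name}))")
        return vars[name]
    return re.sub(r"\(\(([\w.-]+)\)\)", sub, template)

config = "network: ((network_name))\ninternet_connected: false"
print(interpolate(config, {"network_name": "pcf-deployment"}))
```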
G: The tile is then configured with the interpolated configurations from one of the previous steps. If all goes well, we move forward onto the next step. If there's a problem with the staging params, it is reviewed, updated in the config, and then the job is kicked off again; we like to fail forward in this sense. It's very easy for this to happen: a simple typo in the configuration file, a small change, or we forgot to add a param for a new tile version — something along those lines.
G: This is where we do the drift validation, and this right here has been kind of the key to our success. Once this process is done, we post links to our pending changes — there's a separate job that actually runs so that you can see the pending changes — so we know exactly what we're supposed to be looking at.
G: We post links to the drift — on the right-hand side here is a small example of what that looks like — and then there are a couple of other jobs that we run before we kick anything off. We push that into Slack, we get approvals on it, we review them, we take a look at the changes, make sure this is what we want and that nothing slipped in, et cetera, and then we move forward.
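A minimal sketch of that comparison step, assuming the pre-stage and post-stage Ops Manager exports are available as YAML text files (the file names are illustrative):

```python
"""Diff the before- and after-staging exports and surface any unexpected
differences for peer review before 'apply changes' runs."""
import difflib
import pathlib

before = pathlib.Path("backup-before.yml").read_text().splitlines()
after = pathlib.Path("backup-after.yml").read_text().splitlines()

drift = list(difflib.unified_diff(before, after,
                                  fromfile="before-staging",
                                  tofile="after-staging",
                                  lineterm=""))
if drift:
    # In the pipeline this diff would be posted to Slack for approval.
    print("\n".join(drift))
else:
    print("no drift detected")
```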
G: This is what applying changes actually looks like — I'm sure everyone's familiar with the BOSH, or sorry, the Ops Manager apply-changes screen. This is how we communicate it in Slack: once we kick off the changes, we post what it's doing and then update the emojis accordingly, so that we have a good idea of where we're at. Anybody on the team, management — anybody, really, that is part of our Slack channel for this — can take a look and see where we are with any deployment at any given time.
G: So, as a reminder: before automation we had consistency issues, we had challenges tracking any sort of changes or updates or anything along those lines, and we had lengthy and chaotic upgrades. Here's where we are today: all of our tiles are backed by configuration files; foundation consistency, which was all over the board, is now at an all-time high; and we've gone from no change tracking to full change tracking, verification, and accountability.
G: We've moved from one foundation single-point upgrade every two weeks to three-plus multi-point foundation upgrades per week. What I mean by that: we are currently in the process of upgrading from 2.4 to 2.7, and we have been running two to three foundation upgrades per week from start to finish. That's a huge difference in the amount of time it takes to actually get upgrades done.
G: And this is pretty much due to the lack of failures along the way. I'd like to give special thanks to Urge and JP, two of my team members, who worked very hard on the initial path to getting us rolling with automation, upgrades, and deployment — I just want to say thank you, guys; I really appreciated it. I'd also like to give additional thanks to everyone on the platform team.
G: To answer your question: sorry — we did have foundations in place before we started doing the automation, that is correct; however, we roll out any new foundations using platform automation. Also, this presentation was summed up: there are actually a lot of other jobs that I have running on platform automation specifically that I wasn't really able to cover here, but yes, we are using it. And those foundations that we initially brought online, though they had all the different changes —
G: — those got folded into automation, and now, as we've gone through several upgrades, they've all consistently updated and become very similar as we move forward. So the only real difference between installing a new foundation and upgrading a foundation is one job: we have an install Ops Manager job instead of an upgrade Ops Manager job, and it operates very similarly. Okay.
G: So the way we handle it is that we run the install Ops Manager job which, as I said, is very similar; there are a couple of things you have to set up first. Basically, you need a state file for tracking that sort of thing, and the version you want to run, and you've got to make sure the tile for that, and all the rest, is in an S3 repository with the rest of our tiles. Once you run that, you install it, and from there we treat it just like a regular foundation in the upgrade process.
G: We also go through — we have a template for each foundation, because each foundation does have its own set of template and config files. We just copy that template over and then update the values to reflect what we want for that foundation: all the unique things like IP addresses and, you know, domain names.
G: Yeah, I'm happy to share a copy of the slides. The other thing, too: there is a lot to cover, and if anyone has any questions, feel free to reach out to me; I'm more than happy to assist where I can. It's a bit of an undertaking if you're not familiar with it or haven't worked on it before.
G: There are also a couple of things under the hood that are going on. You'll notice there are three slides towards the end that talk about it: we're using a hierarchy of global, context, and local changes. Basically, I've split it out so that all the global changes that are ubiquitous across all foundations exist at the global level, and then you've got contexts, which are network-specific.
G: Certain networks have certain requirements, and those fall there; and then we have our local .yml files, which hold all the stuff like IPs and data sources, that sort of thing. Basically, when we do our upgrades, once everything is set in place, you really don't touch the global or context files during the upgrade; you do that once for the initial setup, and then you just update the little local files. Something that makes this effective: there are around, I don't know, four hundred lines in your standard PAS config file.
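A minimal sketch of that layering, assuming the three layers are YAML files that deep-merge in order, so local values override network-context values, which override globals (all file names are illustrative):

```python
"""Deep-merge global, context, and local config layers into one
foundation config. Requires PyYAML (pip install pyyaml)."""
import yaml

def deep_merge(base: dict, override: dict) -> dict:
    """Return base updated with override, merging nested dicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

layers = ["global.yml", "context-prod-network.yml", "local-foundation1.yml"]
config: dict = {}
for path in layers:
    with open(path) as f:
        config = deep_merge(config, yaml.safe_load(f) or {})
print(yaml.safe_dump(config))
```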
F: Oh sorry, go ahead — Doc, sorry. I'm especially interested in that multi-foundation thing: as far as I remember, the PCF automation tooling was basically designed to upgrade a single site. So I'm also wondering how you tackle the relationships — is there a dependency between sites, or is each site updated independently?
G: The way we're handling that is twofold. On the Concourse side of things, we have regional deployments for all of our major regions, and inside of those regions we have context-specific foundation deployments. We basically use teams in Concourse to manage which pipeline runs for what. We have our staging foundations, for example — ACT stage and TD stage, just as two examples — and those will each have their own team inside of them, and those teams will have the pipelines specific to those foundations.
B: Thanks to the T-Mobile team for proposing these great talks and for delivering them. I'll figure out with you guys offline what the best way is to share this with the wider organization; we'll see if we can get the slides posted or something. I'm not sure where we keep things like this, but maybe we've got to create a place where we can share past and present CAB presentations. But thank you very much for that.
B: Okay, well, thank you everyone very much for joining us today. Thanks to everyone: Eric for presenting the App Runtime updates, Chip for the Foundation updates, and of course, thank you very much to the T-Mobile team for presenting today and making this a really information-packed CAB meeting. Thanks, everyone.