A
So one important thing to understand is that UPS has a long, strategic partnership with Red Hat. We've been working with Red Hat Enterprise Linux for a long time, and there was this interesting synergy where we had adoption of the Fuse stack: we had purchased Fuse for our integration problems, and when Red Hat purchased FuseSource, it sort of came together. And there was another piece here, you know, the CI/CD stack, so when it came together with Red Hat there was this synergy where OpenShift was covering some of that same functionality. We were familiar with that stack, we'd been visiting a lot of OpenShift community events and things, and we started really seeing that there was this alignment: OpenShift covered some of that same functionality we were looking for.
A
Then, as we started using OpenShift, we saw that there was really more there: the possibility to create what we think of as a private cloud, and to provide a kind of roadmap, or runway, that gets us onto hybrid cloud, which aligns well with our needs. So one of the questions was: which is the first application that would justify that investment? For us it was really this Edge platform and an application called Site. So I'll turn it over to Rich.
D
You start with data, and UPS has one of the strongest data networks in the world. Today we're creating new ways to put it to better use. An example is Edge, a suite of more than 20 initiatives in development that will work in unison to optimize operations. It leverages data to assign tasks and minimize driver and inside-employee overtime for on-road, PM-sort, and AM-sort operations. To get an idea of how Edge works, let's visualize the PM sort at night.
D
Our operations tackle many tasks, some simple, others complex. With data we collect throughout the day, a central computer constructs a detailed, dynamic operating plan that breaks assignments down into tasks for a variety of workgroups to perform. One of those is the sort plan. In near-real-time, Edge analyzes and optimizes staffing resources and prioritizes next tasks: instructions for employees unloading and sorting packages, for example. Before vehicles return to the facility, Edge begins analyzing data and prioritizes which cars should be unloaded first.
D
If others arrive with higher-priority packages, it modifies the sort plan and issues updated tasks wirelessly to unloaders and the management team. Edge also takes advantage of data to develop an operating plan. This plan separates tasks among employees, balancing workload and minimizing overtime. That's particularly critical when reviewing package exceptions throughout the day: some packages are not delivered for various reasons, and Edge analyzes and divides the entire amount of work among available resources.
D
As new exceptions arrive, the workload is rebalanced and communicated to employees in real time. Through optimization, Edge minimizes overtime, balances workloads, and improves quality. It's this dynamic use of data that creates greater value in the UPS network. Data feeds everything in the world, and UPS is using it to revolutionize its operations network, driving costs lower and margins higher.
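The rebalancing described here, spreading incoming work across the people available and re-spreading it as exceptions arrive, can be sketched as a simple greedy assignment. This is a toy illustration of the idea, not UPS's actual Edge algorithm; the employee names, task names, and durations are all made up:

```python
import heapq

def rebalance(tasks, employees):
    """Greedy sketch: always hand the next task to the least-loaded employee."""
    # Each heap entry is (total minutes assigned so far, employee, task list).
    load = [(0.0, name, []) for name in employees]
    heapq.heapify(load)
    for minutes, task in sorted(tasks, reverse=True):  # longest tasks first
        total, name, assigned = heapq.heappop(load)
        assigned.append(task)
        heapq.heappush(load, (total + minutes, name, assigned))
    return {name: (total, assigned) for total, name, assigned in load}

# Re-running this as new exception tasks arrive rebalances the plan.
plan = rebalance([(30, "reload car A"), (25, "resort belt 7"),
                  (20, "exception bay"), (15, "late car")],
                 ["Pat", "Sam"])
print(plan)
```

Each new batch of exceptions would simply be folded into `tasks` and the plan recomputed, which mirrors the continuous rebalancing the narration describes.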
C
There we go. Better, there we go. So Site is one of the biggest initiatives in the Edge program. As we just mentioned, based on the information that we gather, we synthesize information from over 40 systems, and with it we boost the speed of decision-making within our operations. It also fuels our smart logistics network strategy: because of our OpenShift platform, we can adapt as business needs change, and it also improves overall customer satisfaction.
C
The information we provide to our supervisors is on a mobile device. With that, we can alert the supervisor to the areas needing the most attention. So if there are more packages in one area of the building versus another area, they can easily see that. The information we're providing with this platform is there to help the supervisor make those decisions, to get our package delivery through the system as quickly as possible and improve customer satisfaction. And with that, I'll turn it over to Jignesh to talk about our DevOps transformation journey.
B
Okay, good afternoon, everybody. I know this is a little busy slide, but what we wanted to highlight here is the ecosystem we put together to support our CI/CD goals: leveraging DevOps practices to minimize the handoffs that happen between development team members and operations team members in getting code from a development environment to a production environment, with the ultimate goal of delivering software that meets the business need and the customers' needs. That was all part of creating that ecosystem and changing the culture.
B
In that process we realized that it was very important to have a platform that we can rely on, that we can count on, and that integrates well with all the technologies and tools listed on the slide there. Sure enough, OpenShift was where that support came through, and we were able to deliver on this product called Site that I just talked about. With that, I'll turn it over to Mark, and he can walk us through the launch and lessons learned while bringing up OpenShift at UPS.
A
The remaining slides are basically lessons learned. One of the main things was, once we decided that we were actually going to build out OpenShift, there was a long vetting process. We also did a bake-off; you've heard about very similar processes done at other large companies, and it's the same kind of thing.
A
OpenShift went GA in January, and by April 14th we delivered a production cluster, which the application team ended up deploying on. At the same time we had to automate it, right? Our strategy was very similar to what others are doing, which is putting it on physical infrastructure, bare-metal servers. That means we needed to automate the deployment of bare-metal servers with scripting automation like Ansible, and some of that skill we didn't really have in-house at the moment.
A
We did a very similar thing where we broke up our dev, our stress, and our production clusters, which are three distinct clusters, but across those clusters we have about four thousand containers running. Production has fifteen-hundred-ish across two data centers, so we were able to achieve, at least from a deployment perspective, a very high rate of deployment. The next thing you realize is that it's not just about the infrastructure, and you heard that today.
A
It's about building a practice, right? Because our group, as an enterprise team, really had to achieve some transformation in the organization. We have the same kind of cultural issues that other large organizations deal with. We wanted to enable things in all these areas, like new architectures, and we had to come up with patterns and practices around microservices. So the first of these we took on was really microservices.
A
So what are the governance problems that you encounter with deploying microservices? You can't do microservices without automation, so we had to pick a stack, that's clear, for deploying our applications quickly. We selected Jenkins, and then, as soon as you start building Jenkins pipelines, the question becomes: what are our testing tools, and how do we enforce policy?
A
So we pieced together a stack of tools that really allowed for that and, at the same time, when necessary, plugged in to our existing tools. A lot of custom code ended up being written for attaching to things like our change management systems; we have an existing change management tool and business continuity systems, right? So we had to forward traces off to our existing business continuity (BCC) systems.
A
Then, what were our targeted training groups? Who are the users of the application, of the platform? One is obviously developers who are developing new applications, so we have tailored training just for them, which focuses a lot on CI/CD and the nuts and bolts of the underlying platform. But then there's targeted training for infrastructure and platform teams. If I'm a team that owns an enterprise deployment of a database, what does OpenShift mean to me? It's different.
A
Their
view
of
the
infrastructure
is
different.
They
may
not
be
using
the
same
exact.
You
know
deployment
approaches
that
that
they'll
still
have
a
pipeline
which
drives
their
deployments,
but
they
may
not
be
they
may
be
doing.
Docker
builds
directly.
Not
maybe
s2i
builds
right,
so
we
had
to
target
training
and
really
put
that
all
together,
so
that
we
could,
you
know,
have
a
good
foundation
for
transforming
the
organization.
A
This slide is from a GOTO conference in Copenhagen. It's an adapted maturity model: we added some things their model does not include, like governance, which is important at larger organizations. We adapted it and started really using it to measure ourselves. So we asked: where do we start? Which of these tracks are we really behind on, and then what can we achieve in the next year? Right?
A
Another thing we encountered was just size and scale. Everything we do at UPS has to scale, and one of the limitations we found was on the ingress router side. As you start throwing huge amounts of HTTP traffic at the cluster, you find the limits of at least the basic setup.
A
There's kind of a basic five-server, or five-node, setup that has at least two infra nodes running HAProxy routers, and those just will not scale once you really throw large amounts of volume at them. That means adding multiple instances of the HAProxy router, assigning different ports so they can live on the same infrastructure, and then having a hardware load balancer in front that really load-balances traffic across both infra nodes.
A
Another thing we found was that, once you set that up, if you're going to do Docker pushes into that same infrastructure (this was our test environment, and it may not be necessary for test), the ports you open had better also be opened on the HA router and on the hardware load balancer, because Docker requires the ports to be open all the way through. Just small things we encountered.
A
So, for a given number of transactions per second, how much CPU and memory are you going to need, in terms of millicores, of course, and memory? Then come up with a model where you build the normal distribution and say: for 90% confidence, given these best and worst cases, I will need this many servers to deal with production. That's what we've actually got. It took some time, but this is a maturing process.
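A capacity model of the kind described, turning a measured per-transaction cost and a best/worst-case load range into a node count at 90% confidence, might look like the following sketch. Every number here is invented for illustration; it is not UPS's actual sizing data:

```python
import math
from statistics import NormalDist

# Hypothetical inputs, not measured figures.
CPU_MILLICORES_PER_TPS = 35          # measured CPU cost of 1 transaction/sec
BEST_TPS, WORST_TPS = 4_000, 9_000   # best- and worst-case peak load
NODE_MILLICORES = 16_000             # usable CPU on one 16-core worker node

# Model peak load as roughly normal, with ~95% of the mass
# falling between the best and worst cases.
mean = (BEST_TPS + WORST_TPS) / 2
std = (WORST_TPS - BEST_TPS) / 4
p90_tps = NormalDist(mean, std).inv_cdf(0.90)  # load not exceeded 90% of the time

nodes_needed = math.ceil(p90_tps * CPU_MILLICORES_PER_TPS / NODE_MILLICORES)
print(nodes_needed)  # → 18
```

The same shape of calculation extends to memory; you size for whichever resource demands more nodes.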
A
Log in, build, deploy, verify, promote, right? It really is in your best interest to capture that in a shared code base that you can give back to application teams, to give them something that is at least a basic setup. For us it's a little more than basic, because we took all of Gitflow, the entire branching strategy involved in Gitflow, and really captured all of that in a pipeline.
A
We did some very clever things. For things like feature branches, we spawn new infrastructure: we create OpenShift projects dynamically based off check-ins to feature branches, right? Things like that, but they're only possible on OpenShift and this infrastructure. So you can either build your own or wait for ours, because we're trying to open-source ours; we're going through our own little legal review process to get our codebase out.
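Spawning a project per feature branch largely comes down to turning branch names into valid project names and invoking `oc`. A minimal sketch of that idea; the naming scheme and helper functions are assumptions for illustration, not the codebase being open-sourced:

```python
import re

def project_for_branch(app: str, branch: str) -> str:
    """Derive a DNS-label-safe OpenShift project name from a feature branch."""
    raw = f"{app}-{branch}".lower()
    name = re.sub(r"[^a-z0-9-]+", "-", raw).strip("-")
    return name[:63]  # project names must be valid DNS labels (max 63 chars)

def new_project_cmd(app: str, branch: str) -> list:
    # A pipeline would run this on each check-in to a feature branch,
    # and `oc delete project ...` once the branch is merged.
    return ["oc", "new-project", project_for_branch(app, branch)]

print(new_project_cmd("edge", "feature/Sort-Plan_v2"))
# → ['oc', 'new-project', 'edge-feature-sort-plan-v2']
```

Tearing the project down on merge is what keeps this pattern from leaking cluster resources as branches accumulate.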
A
The other thing was Jenkins, so, tuning Jenkins for scale, right? You can either distribute Jenkins, which we've heard a couple of companies doing, where they distribute Jenkins masters and allow app teams to spin up their own. For us, we started from the perspective of: let's build out the centralized Jenkins infrastructure, which we already had, and expand it out, but it has to scale now. So what do we do? We use the Kubernetes plugin.
A
We start creating build agents in OpenShift, allowing the Jenkins masters to offload build agents into OpenShift, and then we tune the Jenkins master to aggressively spin up Jenkins build agents. The settings shown are the specific settings we encountered; that was two days of digging through the Jenkins codebase before we figured that out.
A
So, the future. We've had some good success with OpenShift, right? We're now in a spot where application teams are coming to us. We've done some of the early transformation work, and there is this drive now from application teams around the organization; they know their future is to build there, especially if they're building anything that runs on Linux.
A
You know, their future is to come build on OpenShift. We really think OpenShift will start to consume most of the Linux workload that's in our data centers. So we want to take 3.9, build out a 3.9 cluster, and expand our workloads. Stateful workloads were one thing we really didn't do at the time because, from our perspective, building persistent workloads and StatefulSets really needs container-native storage; otherwise the management just becomes out of control.
A
We started playing with it by, you know, building NFS mounts and trying to manually deal with that stuff, and it just will not scale at the level we want to operate. We're going to take advantage of some of those metrics and monitoring features and the Open Service Broker API that's in OpenShift 3.9, and really give our developers that full-stack automation.