From YouTube: OpenShift at Inmarsat
Description
Kevin Crocker from Inmarsat discusses their production deployment of OpenShift at the OpenShift Commons Gathering Boston on May 1, 2017.
Learn more and see the slides here: https://blog.openshift.com/openshift-commons-gathering-at-red-hat-summit-2017-video-recap-with-slides/
We all have the same sort of common goals and the same challenges. In actual fact, you could take a slide from nearly every presentation here, put it together, and present it; you wouldn't get the exact same imagery or story, but the values, the challenges, why we chose OpenShift, and the value we're getting out of it all come along the same lines. So the plan here: a little intro about myself. I'm Kevin, and I'm actually a manager of a new centre of excellence built around integration and interoperability, where OpenShift is now a big part of that.
I don't have my technical team here to support me, so any questions of a technical nature I may be hard pressed to answer. I do come from a technical background, but I'm nowhere near as technical as the guys on my team. So: who Inmarsat is and what we were looking for, why we chose the dedicated environment of OpenShift, our challenges and lessons learned, and some of our next steps over the next few years. So, who here knows Inmarsat, or has heard of them? Okay, great, we've got a few.
Do you know what we actually do? We've been in the news, but do you know what we actually do? That's what I want to cover a little bit here. This is our head office, our mothership, located in London in the UK. I'm actually from a satellite office, pun intended, in St. John's, Newfoundland, so it was a very short trip for me. Other people here, I know, came from Sydney and France and all over the place; it's amazing the group of people that came here. So, who is Inmarsat?
We were actually founded back in 1979 by the International Maritime Organization, basically to enable ships to stay in contact with the shore and to call for emergency help. We originated as the International Maritime Satellite Organisation, thus "Inmarsat". We've got a 37-year track record of providing connectivity to customers on the move.
On YouTube you can find a very nice video covering 34 years of our history and what we've been a part of throughout that time. It was too long to show here; it would have taken up half of my presentation. But I do have a video to show you about what we actually do, and I promise it plays into what we're trying to achieve. Today we're still in the maritime sector, but in many different sectors as well: governments, airlines, broadcast media, oil and gas, mining and construction, aid agencies, providing global coverage.
Typically, we go where terrestrial telecom networks are unreliable or simply cannot reach. With our satellites and constellations we can provide global coverage, and we do that through our range of satellites; we have several constellations out there now, fully optimized with a ground infrastructure network, and we partner with a lot of market-leading distribution partners. So I'm going to play the little video, which will show you more of what we do.
[Video] Miles away in the silence of space, Inmarsat is in touch, making millions of connections every hour, keeping the audio and data flowing to every corner of the globe. Our satellites are connecting the disconnected, enabling communication in a fast-paced world, saving time and money and transforming lives, making the difference every single day. We keep people in touch in many different ways: keeping those in remote places in touch with a voice from home, keeping everyone talking when other systems have failed.
In fact, through these services, we all rely on it to do our jobs. You can't see our connectivity, but it touches all of us, and like never before people expect it to keep them connected in the most unusual locations, to help them use information to discover opportunities and innovate solutions, to bring breaking news from anywhere to everywhere. Inmarsat brings expertise and education, so those with little have a chance for more.
Well, I couldn't say it any better than that, but there is a big analogy here, and this is why it really plays into what we're trying to get across and what we are looking for. Look at the picture there: they talked about connections, millions of connections every day, communicating across the world, different systems connected with each other. As I was putting this together, I was thinking about what we were trying to achieve, and I thought about that.
We are looking to do "Inmarsat for Inmarsat", so think about that for a minute. What we're trying to do with the business case is really get an enterprise service bus. We're about integrations: our internal systems and back-office systems need to talk to each other, and the partners that we deal with are actually hooking their back-office systems up to our back-office systems. So we're dealing with all these connections, and now, if you think about where a lot of people are trying to move, we had to transform quicker.
Someone earlier talked about being fast in this space. We want to be more agile, with DevOps and CI/CD, and we are actually looking at the cloud as well to help us there. So here's another little analogy: you can see Red Hat almost like our launch pad, with OpenShift being our satellites, able to connect to everything that we need to connect to as well. So there's the analogy. As for Inmarsat, you can imagine that after 37 years we have a lot of older systems in place.
A
We
have
a
lot
of
technical
debt.
We
need
to
change.
We
really
didn't
have
a
core
integration
layer,
that's
at
an
enterprise
that
will
support
all
those
integrations
of
other
companies
that
have
come
into
the
Marzette
family.
Each
of
those
have
had
their
own
systems
as
well,
and
a
significant
portion
of
our
legacy.
Application
architecture
is
integrated
to
a
point
models.
When
integration
aspect,
the
point-to-point
is
very
hard
to
maintain.
We
can't
change
it
very
easily.
It's
not
scalable
and
it's
this
big
mesh
of
everything.
So
we
it's
very
hard
to
move
forward.
Our objective in this case is really to simplify the architecture, and our architecture team is looking at this. We really need to reduce those integration activities, and we wanted to provide flexibility and scalability to meet the challenging business needs; you can think about all those sectors.
We're looking to be an enabler: we're opening up our systems so other people can build their own applications on our infrastructure, so we need these sorts of web services and a strategy for that as well. The strategy is focused on the future. The core, that is, an ESB, is a fundamental component of that, and we need to adopt newer architectural models than we have been used to: APIs, microservices, containerization.
All those things that we've heard through all the different stories here today. With that, the hope is that we get a much simpler picture, with an ESB in the middle: clean, changeable, scalable, robust. This was a decision we didn't take lightly, and we don't take things lightly given our customer base. Think about the connectivity in the places we are: if our connectivity goes down, lives are at stake. Our CEO actually just presented to us in St. John's; he was there this past Friday.
He made a sort of comment; I believe it was that every day Inmarsat saves at least five lives. So it's very important that we have these things up and running, and we had to do a lot of due diligence on the core of what we want to change. We came up with an extensive list of 100-plus functional and non-functional requirements, and we did a lot of research. Everyone's gone through the POCs; we had requests for proposals, and we took months, many more months than we usually take.
Some of those criteria, again like all the people that talked here today: flexibility and scalability. We wanted resilience to avoid that single point of failure; we wanted developer capabilities, developer tools, and developer flexibility. There are actually different deployment options, high-availability infrastructure models, load-balancing capabilities, and service deployment options that gave us the capability to distribute components, formation of a logical bus, management of messages, and isolation of components.
So after many months, and actually getting a third party in as well to evaluate not only the technical aspects but also the companies that were going to provide it to us, Red Hat came out on top, based on the technical, non-technical, and commercial analysis of all that. From an integration point of view we decided, at the end of the day, to go with Fuse Integration Services to meet our integration needs, and that was on the OpenShift Dedicated platform.
That's Fuse Integration Services. So why OpenShift Dedicated? Because we needed to focus on the services. What's happening is that we have many transformations going on, and they were all waiting for an integration layer, so we needed to get up and running quickly, and the team that we were putting together wanted to focus on those services. We needed to focus on the governance: how we're going to use OpenShift, and all the different changes that are going to happen with the new
A
You
know:
agile,
sort
of
things,
the
ICD
getting
embedded
and
all
those
other
changes
that
would
happen
in
our
company
and
so
with
open
shifts.
The
architecture
group
exercise
looking
at
open
source
projects
as
well,
and
we
want
to
make
sure
that
they're
very
stable.
We
we
found
that,
with
over
shift
and
red
hat,
that
the
open
shift
components
that
support
the
new
architectural
models
are
typically
available
faster.
A
So
all
the
other
vendors
that
we
looked
at
and
seeing
some
of
the
things
in
the
open
source
market
coming
down
through
Red
Hat,
was
making
them
more
available
quick
available
quicker
and
be
able
to
support
those.
So
we
didn't
have
to
worry
about
checking
it
out
and
all
those
sort
of
things
redheads
cutters,
sort
of
doing
that
for
us.
So
we
used
you
know
their
templates
and
because
the
platform
was
open
standard,
there
was
no
vendor
lock-in
as
well.
For
us
we're
not
locking
into
anybody.
A
In
particular,
there
was
a
strong
support
for
premise
and
cloud
and
hybrid
cloud
deployments,
and
that's
again
through
the
transformation
looking
at
cloud
first,
but
some
things.
We
need
to
keep
on
premise,
other
things
you
might
have
in
hybrid
and
other
ones.
We
just
so
I've
moved
to
the
cloud
and
it
supported
different
deployment
models
for
distributed
hosting
model
with
no
single
point
of
failure.
A
Overshift
dedicated
is
what
we
went
went
for.
At
the
end
of
the
day,
we
were
looking
at
fabric
with
your
enterprise,
the
enterprise
solution
on
pram
and
I
think
this
is
in
both,
but
the
numerous
connectors
are
available,
but
we
also
had
the
ability
to
write
around
the
OpenShift.
Dedicated
platform
is
a
platform
based
on
the
industry,
standard
docker
containers.
A
So
with
that
the
we
still
had
challenges,
even
though
you
know
we
had
a
good
supporting
partner
with
us.
There
was
still
some
some
challenges
as
we
went
forward.
We're
really
new
into
this
as
well,
but
over
a
year
when
we
first
started,
it's
only
been
about
a
half
a
year
that
we've
been
actually
developing
and
getting
our
process
together
and
actually
deploying
things
to
the
platform
and
the
biggest
one
for
us
is
right.
Now
is
capacity
management.
A
We
had
some
assumptions
that
stability
was
going
to
be
guaranteed
out
of
the
box,
just
assumption
we
were
making,
because
we
had
chosen
Fizz.
We
assumed
that
we
would
use
that
for
all
our
integration
services
and
that
the
footprint
our
services
will
be
greatly
diminished
with
the
container
model.
These
were
some
assumptions
that
we
made
as
a
development
team
going
forward,
but
some
challenges
have
we
got
going
with.
That
was
how
do
we
properly
test
and
tune
resource
configurations
for
optimization?
We quickly got into a situation where we weren't fine-tuning those or actually configuring the containers themselves, and we were starting to see pods recycled, and we weren't sure what was going on. The team dug into that, and we did find out that pods aren't really isolated in this case: they can impact each other if we don't constrain them on the physical hardware of that node. What we were doing was not setting defaults, and a lot of the stuff with Java and the JVM would automatically want to go up and use at least 25% of the memory.
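That 25% figure matches the JVM's historical default of sizing the maximum heap from the machine's physical RAM rather than the container's limit. As a minimal sketch of how to check and constrain this (the class name and flag value are my own illustration, not from the talk):

```java
// Sketch: inspect the heap ceiling the JVM has picked for itself.
// In a container without explicit settings, older JVMs derived this
// from the *host's* RAM (roughly 25% of physical memory), which is
// how one pod can eat into its neighbours' share of a node. Newer,
// container-aware JVMs respect cgroup limits; flags such as
// -XX:MaxRAMPercentage make the sizing explicit.
public class HeapCheck {
    public static long maxHeapBytes() {
        return Runtime.getRuntime().maxMemory();
    }

    public static void main(String[] args) {
        // Run with e.g.: java -XX:MaxRAMPercentage=50.0 HeapCheck
        // to cap the heap at half of the available (container) memory.
        System.out.println("Max heap bytes: " + maxHeapBytes());
    }
}
```

Printing this value from inside a pod is a quick way to confirm whether the JVM is sizing itself from the node or from the container limit.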
So our developers are fine-tuning their own code and setting some of those resource limitations, and our ops people are taking those and looking at what the full capacity is on our dedicated platform, to ensure that when we do deploy new services out there we're not going to hit the maximum, and so that we can then go and work with Red Hat to increase the nodes in the dedicated platform. The other lesson is that using a microservices model decentralizes decisions regarding programming languages and frameworks, and it allowed developers to experiment more.
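The resource limitations mentioned above are typically expressed as requests and limits on each container. A hypothetical fragment of a deployment spec, with illustrative numbers rather than Inmarsat's actual values:

```yaml
# Illustrative container spec fragment: without explicit requests/limits,
# pods on the same node can starve each other. Requests drive scheduling;
# limits cap what the container may actually consume.
resources:
  requests:
    cpu: "250m"      # quarter of a core reserved at scheduling time
    memory: "256Mi"
  limits:
    cpu: "500m"      # hard ceiling enforced by the platform
    memory: "512Mi"  # exceeding this gets the container killed
```

Summing these requests across all deployed services gives the capacity figure the ops team compares against the cluster's node count.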
So even with all these challenges, we find our developers can actually play with a lot more stuff. They have their own namespace in the dedicated environment and they can just play away. Actually, in this move we've put everything in the cloud: all of our desktops as a service, developer machines totally in the cloud on Amazon connecting to the OpenShift Dedicated platform, and all of our CI/CD servers. The whole works is all cloud-based; we have nothing on-prem in our integration layer or supporting the integration layer.
The other thing as well, which is very important: even though the technology is great, it's not going to replace a well-defined and working team structure. What we have found is that OpenShift is very receptive to the DevOps model, and so it's becoming very easy to get governance around that and to improve our team. The second biggest challenge, from my perspective, on OpenShift Dedicated is around upgrades, and I heard other people here talk about their upgrades too.
A
It
was
something
else,
don't
trust
whatever,
but
the
assumption
we
were
making
as
well
as
the
opens
OpenShift
dedicated,
would
just
work
and
upgrades
will
not
be
something
we
had
to
worry
about.
Red
Hat
plans,
those
they'll
upgrade.
We
wouldn't
have
to
worry
about
that.
Everything
will
continue
working,
our
paws
will
stay
running
and
all
those
sort
of
things.
The other challenge we have with that is educating others about the platform. Not everyone understands what a container-based platform is about. In a company like Inmarsat, where we do have the silos of network, a testing group, security, and other groups, it became very difficult to help them understand what this new platform is. I think it was Joe earlier who mentioned the three things about the transformation that's going on.
I think there's also a cultural change within companies when you start to adopt these new technologies, and every group, not just the development group and the architects, needs to be educated. The other challenge as well: we are very risk-averse. We want to know exactly what's going on. Planning an upgrade in a small window with the least disruption is very important to us, and our stakeholders, my customers, aren't happy when I go to tell them that the upgrade is going to happen sometime this week.
They want to know down to the day, and we have worked with Red Hat very closely to actually get some better communications out: let us know 24 hours in advance that they're going to upgrade our platform, an hour in advance of when they start, and then when they finish as well. We are global; we have several teams, not only my team. We do governance and guidance for some outsourced teams as well, and they work 24/7 across the time zones and everything, so they may actually be doing testing on a weekend.
That may be some heavy-load testing where they want to get some metrics, and if an upgrade happens while they're doing that and the pods are restarting, it can void some of their testing. So we really had to plan some windows. The lesson on that is that we had to plan those upgrades like we did before in the past, and work closer with Red Hat to plan those.
Thirdly, the non-functional requirements. We actually had the assumption that monitoring and logging would be out of the box as well, and I've heard from many people today that they're using different open-source products to do their monitoring and logging. It's very important that we do that, and we're finding very quickly now, from our support group and our ops groups, that they want those sorts of capabilities. On the dedicated platform, we can log into the portal and look at one pod.
But many of our services run for days for some of the programs that we have, and we want to see the health of the different pods as they work together, and that's very difficult right now in the dedicated platform. I'm sure we could go out and start looking at the other open-source tools, and that's another benefit of being here as well: listening to the stories, we now have some ideas to go and do that. So our challenge has been getting that centralized logging on the dedicated platform.
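One common way to surface per-pod health to the platform, offered here as a sketch since the talk doesn't name a specific mechanism, is Kubernetes-style liveness and readiness probes on the container spec (the paths and port below are placeholders, not an actual Inmarsat service):

```yaml
# Illustrative probe fragment: the platform restarts the container when
# the liveness probe fails, and withholds traffic until readiness passes,
# giving ops a platform-level view of service health per pod.
livenessProbe:
  httpGet:
    path: /health   # placeholder endpoint the service would expose
    port: 8080
  initialDelaySeconds: 30
readinessProbe:
  httpGet:
    path: /ready    # placeholder endpoint
    port: 8080
  initialDelaySeconds: 5
```

Probes make pod health visible in the console and API, though aggregating it across long-running services still needs a monitoring stack on top.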
It has come in their newest releases, but we're having some difficulty actually achieving the common monitoring capability to see the health of a service; I just sort of explained that one. And because we really haven't formed the DevOps sort of team topology, our developers are actually performing a lot of those administrative and support tasks. Where we have everything up in the cloud, we have our workstations with connections into the dedicated platform, and our developers are going in and actually creating the accounts and things like that so these things can connect.
A
So
the
lesson
here
is
well
for
anyone
who's
starting
new
to
do
in
the
rooms
looking
at
that
understand
the
differences
between
the
offerings.
So
what
is
the
difference
between
OpenShift
dedicated
and
what?
In
the
overshift
container
platform,
which
I
believe
was
called
Enterprise
before
we
understood
some
of
those,
we
had
forgotten
some
of
those
we
have
been
over
a
year.
A
Some
of
those
things
we
knew
didn't
come
right
away
with
dedicated
that
you
could
have
got
with
the
open
shift
container
platform,
but
so
very
it's
very
important
to
understand
those
differences
and
again
we
chose
the
dedicated
because
we
did
not
have
the
want
to
worry
about
getting
infrastructure
in
place
installing
that
software
we
can
just
worry
about
actually
developing
our
services.
In
general,
we
had
no
assumptions
here.
We
knew
the
change
on
the
team
was
going
to
be
difficult.
A
There
was
a
different
way
of
mind:
mind
shift
change
to
treat
the
environment
as
containers
and
not
as
Enterprise
application
servers.
There's
also
a
mindset
that
we
should
share
out
the
communication,
certificates
and
libraries
thinking
this
on
our
old
ways
of
doing,
rather
than
thinking
of
what
a
really
a
container
is
and
making
it
independent
and
down
to
the
testing
of
containers
and
Environment
Management.
A
Our
testing
group
and
the
testing
people
on
our
teams
now
are
coming
around
to
some
aspects
and
differences
of
what
we
need
to
do
with
testing
the
one,
our
biggest
challenges,
because
in
the
integration
layers
as
well
as
we
don't
always
have
access
to
upstream
or
downstream
systems.
So
how
do
we
test
that
as
well?
So the lesson here, from a general point of view, and I mentioned this a few minutes ago: education and awareness across all groups, not just the architecture and development teams, has become very important, so they understand what is happening. They don't need to know the deep technical details, but they need to understand some subtle differences in how we do things. The feedback from my team, on more of the technical side, is that they did have a steep learning curve.
They came from a .NET sort of world; we had played around with CI/CD, but they had to learn a whole new platform and a whole new set of tools and CI/CD practices. We did have a software manager on board from my old team and we were trying to embed that, but it never really got any traction. And yeah, they all had to learn Java; none of them knew any Java. We found there was limited syntax help when writing the YAML and JSON files for creating the containers and pods.
A
So
little
things
like
that,
they're
coming
and
saying
can
I
go
and
ask
the
people
while
I'm
down
there
that
what
they
use
they
find
her.
Some
absence
of
acceptable,
best
practices,
guidelines
for
the
use
of
openshift
and
I.
Think
that
would
be
helpful.
The
CLL
documentation
we
find
sometimes
not
complete,
or
sometimes
the
correctness
of
it
is
questionable.
A
Maybe
we're
looking
at
old
documentation,
but,
as
is
just
feedback
from
my
team,
limited
validation,
a
report
from
a
platform
upgrade
so
like
I
was
alluding
to
with
we
would
get
an
upgrade
and
we
would
be
left
to
find
what
the
validate
you
know.
We
was
validated
and
had
to
raise
support
tickets.
There's
a
one
case
that
we
had
an
upgrade.
and we couldn't even log into the portal afterwards, and we had to raise a support case for that, so we're working with Red Hat on those sorts of things as well. And the support processes: that's just internal to Inmarsat. It's changing our whole support model, from application management to our ops groups and even to our development teams, in terms of who owns what in a DevOps sort of culture. We have accomplished a lot in the six months, I will say, once decisions were made and we got going.
We're looking at probably 300-plus services by the end of this year, and that's just out of one program alone, so we're looking at even more than that. A lot of these are around call data records that we move around; our customers need these as well. So the load, and the amount of messages coming through, is sometimes very important to us as we go forward.
Our next steps: continuing to build "Inmarsat for Inmarsat". Inmarsat itself continues to build as well, and this is our latest satellite, due to launch, I think, within the month at Cape Canaveral. There are some videos online of those launches, and I'm sure we'll all be on our sites watching the launch as well. It's a pretty exciting time for Inmarsat when we actually do launch another satellite.
So OpenShift Dedicated is going to help us keep launching forward. We're going to look at better visibility of system performance and capacity for improved planning; I've heard that from several people here, and overheard other people talking about how they look at system performance. We'll evaluate other container and microservice languages to see if we can get a better, smaller footprint.
Delta, Netflix, and many people here today as well are saying that they're already doing that. We're going beyond the integration layer, so people outside of the centre of excellence that I'm a part of will actually be using the dedicated platform as we move forward. And that's it. Did I do good on time? Thank you.
[Audience question]

We have a steady cycle of load with what we have on the platform today. Our main focus right now is our billing transformation and, of course, as a telecom we do bill runs, so we do see a high peak load come around the end of the month and the beginning of the month. Our customers as well: if you look at their traffic, they cycle up and down throughout the week. As we bring more provisioning systems on, you'll see heavier load in the
month and during times of special events. So when the media cover different worldwide events, you will actually see more SIMs and mobile units being provisioned, and as we bring provisioning interfaces onto this platform, we'll start to see those sorts of things as well. So there are heavy loads on a cycle, and I have a feeling that as we grow more, the load will actually steadily increase and peak every now and then as we go forward.
[Audience question]

They were .NET. I actually came from a Java world; that's my technical background. When I joined the satellite company I was brought in as the Java person, but we went to .NET. So there's been a little bit of a learning curve there, and there are some nuances, but it's the whole containerized platform and how to think about it that's been the difficult thing.