From YouTube: Embracing Open Source for a Healthy Enterprise
Description
Optum Technology runs the technology for UnitedHealth Group. Kong drives the API ecosystem at Optum. This Kong Summit 2019 session details the journey of how Kong started at Optum and became the de facto solution. Optum engineers (and Kong Champions) Jeremy Justus and Ross Sbriscia discuss the steps it takes to be successful with open source in an enterprise, as well as the benefits a Fortune 6 can expect in embracing a community solution.
Hey everybody, my name is Ross Sbriscia. I was hired directly out of college by Optum just over three years ago, and I've been working in the API and API gateway space ever since. I'm also proud to say that I am one of the first Kong Champions.
Now, since our talk is largely focused on Kong and open source in large enterprises, and Optum in particular, I think it's only fair that we give just a little bit of background on Optum itself. So what's Optum? Optum is a large healthcare services and technology company.
It's part of UnitedHealth Group, and one of its important responsibilities is to provide the technology infrastructure for this Fortune 6 healthcare giant. You can see I don't really need to say this, but just for a sense of scale: 300,000 employees, thousands of APIs, countless integrations with internal and external systems. There really is a lot going on in this environment.
So let's focus in on one particular space and start talking gateways. The primary API gateway solution for UHG in the final months of 2017 was a closed source vendor product.
It had been the primary solution for three, almost four, years at that point, and over the course of the preceding year we had been noticing a number of increasingly concerning stability, scalability, and performance issues with the platform. These came in conjunction with the relatively low supportability natural to the solution, meaning that it was difficult in some cases, and impossible in others, to innovate around it to enable things like DevOps, and with the high cost associated with the solution, which in this case means three things. First, the product itself was pricey.
There were voices from all across the enterprise, everybody from engineers to lawyers to architects, and, importantly, people in positions of power and influence, who were saying things like: what would happen if we really started to invest in these open source solutions and devised internal processes which could be supportive of their adoption? What's possible with these products? And it's because of this change in mindset that we ended up with selection criteria for our new gateway solution that looked more or less like this.
We were actively interested in adopting an open source product as opposed to a proprietary solution. We were interested in the extensibility that this would provide us, as well as, frankly, in eliminating the licensing costs associated with our existing solution. We were looking for a cloud native application, something we could see used to support a modern CI/CD pipeline, with decreased recovery time in the event of incidents and better scalability. And finally, we were looking for a more performant solution than our existing one.
Now, if this sounds kind of familiar, it should, because I think this describes Kong pretty well, and I thought so too at the time, which is why we decided to POC Kong. In the interest of time I'm just not going to go into exactly how we chose Kong as opposed to some other products; instead, let's take a look at what was required, once we had decided that we'd like to POC Kong, to bring this application into a production state.
So where were some of the initial hurdles we faced when bringing a newer open-source technology like Kong into an older, established enterprise? Well, the first was being able to gain approval for leveraging the open-source product within our network. That came down to a lot of formal documentation of the application and its capabilities within an internal portal, which other teams and people within our company could go to in order to also leverage the product. After we did that, we moved on to, essentially, security requirements.
When people and users call us from the public internet, we have to worry about malicious actors. Luckily for us, we already had a Web Application Firewall solution that we were able to integrate as part of our ingress into the API gateway architecture. And finally, we iterated with policy makers on a new flow, where we used to have two gateway hops to reach a lot of our services.
After that, we did some initial tuning around Kong and established some central patterns for how we wanted to leverage the Gateway. One was that we wanted a singular OAuth token generation endpoint that all clients could leverage, because our older established gateway solutions all leveraged a single client token generation endpoint, which was better for documentation as well as for customers.
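To make that pattern concrete, here is a minimal sketch of the single-token-endpoint flow from a client's point of view. This is illustrative only: the URLs are hypothetical, and a standard OAuth2 client-credentials grant is assumed, since the talk doesn't spell out the exact grant type.

```python
import requests

# Hypothetical endpoints; the talk doesn't give real URLs.
TOKEN_URL = "https://gateway.example.com/oauth2/token"   # the one shared token endpoint
PROXY_URL = "https://gateway.example.com/my-api/v1/widgets"

# Standard OAuth2 client-credentials grant against the shared endpoint.
token = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials"},
    auth=("my-client-id", "my-client-secret"),
).json()["access_token"]

# Every proxy behind the gateway then accepts the same bearer token.
resp = requests.get(PROXY_URL, headers={"Authorization": f"Bearer {token}"})
print(resp.status_code)
```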
And then we also worked with the Kong config template on things like tweaking the buffer sizes, as well as better TCP and socket connection management.
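As a rough illustration of the kind of tuning involved (the talk doesn't give the actual values or directives), Kong 0.14+ lets you inject nginx directives straight from kong.conf with `nginx_http_*` and `nginx_proxy_*` prefixes. The values below are invented for the example, not Optum's real settings:

```
# kong.conf -- illustrative values only
nginx_http_client_header_buffer_size = 16k      # larger request-header buffers
nginx_http_large_client_header_buffers = 8 32k
nginx_proxy_proxy_buffer_size = 32k             # response buffering from upstreams
upstream_keepalive = 256                        # reuse TCP sockets to upstream services
```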
After we had done some of that, we moved on to making sure that, if we were going to offer this service to UnitedHealth Group's APIs, we had really robust high availability and disaster recovery baked into our architecture. That was really important to us. Early on, we had about three rudimentary alerting, telemetry, and monitoring solutions in place for the Gateway, and I would say by now we're probably integrated with close to ten-plus logging and alerting solutions, basically giving us unparalleled visibility into the gateway runtime.
Inspiration is a bit tricky to pin down, so let's focus in on expertise in these products, take a look at both a closed source and an open source model, and see where the expertise comes from, starting with the former.
In a closed source model, expertise is rendered by experts who are effectively rented and only offer their expertise on specific requests. This means that they can only react, not innovate, which is further hampered by the fact that these experts are not familiar with the specific conditions of your environment.
Now let's contrast this with the open source model. In an open source model, when we're cultivating internal experts, those experts have the opportunity to leverage their skills and expertise for the benefit of the entire space: the solution itself and all of its integrations. This is bolstered by the fact that these experts are fully versed in the specifics of the environment they work in, and all of it is enabled by the fact that the product itself is entirely visible, so your team has no limits on its understanding of the solution. So the question kind of becomes:
B
How
do
we
actually
go
about
in
a
practical
sense,
cultivating
these
internal
experts
and
this
kind
of
leads
me
to
the
second
idea
that
I'd
like
to
bring
up
in
the
context
of
having
confidence
in
open
source
products
and
large
enterprises,
and
that
is
that
of
community
participation.
Now
this
gets
thrown
around
a
lot
in
the
open
source
community
and
we
decided
that
to
help
organizations
that
were
maybe
thinking
the
same
way.
One day a team told us: it looks like we're receiving some HTTP 502s during our testing, but the weird part is we can't find these transactions in Kong's logging solution. Now, we had integrations with an internal logging solution using the HTTP Log plugin; that'll be important in a minute. So we took a look and we said: well, we found the results in our logs, but they were in the standard-out logs, and it looks like you're receiving a transport layer failure, specifically a connection reset, when we're trying to route to your upstream service.
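As an aside, wiring Kong up to an internal logging solution with the HTTP Log plugin is just an Admin API call. Here is a minimal sketch, assuming a local Admin API on the default port and a hypothetical internal collector URL:

```python
import requests

# Enable the bundled http-log plugin globally; http_endpoint points at the
# internal log collector (hypothetical URL below).
requests.post(
    "http://localhost:8001/plugins",
    json={
        "name": "http-log",
        "config": {"http_endpoint": "https://logs.internal.example/kong", "method": "POST"},
    },
).raise_for_status()
```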
So we let them know, and they got to take that back and diagnose what was up with their deployment. But for us, we were left a little bit concerned. We want to have logs of these failures; it's arguably more important to have logs and records of these than of successful transactions. So put yourself in our shoes: at this point we had worked almost exclusively, if not entirely exclusively, with vendor products throughout the course of our whole careers.
B
So,
in
a
situation
like
this,
we
would
have
been
tempted
to
many
of
you
know
the
drill
make
a
support
ticket
get
some
logs
together,
maybe
get
a
meeting
in
a
couple
of
weeks
to
talk
about
the
problem.
In
this
case
we
decided
to
make
a
post
on
cognition
and
we
said
hey.
It
looks
like
on
layer
for
transport
layer
failures,
the
con
h-2b
log
plugin
just
isn't
logging.
The
events.
Does anyone know why? We weren't quite sure what to expect when we did this. We weren't sure if we were going to get a response in a couple of weeks or a couple of months, or if it was going to be intelligible or helpful at all. So imagine our surprise when, that very day, none other than one of the principal engineers for Kong itself, Mr. Thibault, responds and says: hey, give this a shot; see if you can just put this little code snippet in your error handler block, and see if that doesn't do the trick for you. And so we thought: wow, this is great. So we grabbed the code snippet, we put it in our nginx conf, we rebuilt our dev environment, and it didn't work. Undaunted,
we made another post on Kong Nation, describing exactly what we tried in as much detail as possible, and included all of our sources. And the community responded again. Thibault says: well, I may have spoken too soon. It looks like nginx's error_page is doing an internal redirect, resetting the nginx context and preventing plugins, including the HTTP Log plugin, from running. And we said: you know what, we think we can roll with that.
Maybe. And so we took it back, we did a little investigation, and we found something. We made another post and said: we may have found something interesting in a similar open source product. It turns out there's this library which may allow us to share nginx context between these subrequests. What do you guys think? Can we build this into Kong somehow? And an engineer that we hadn't worked with up until that time, but whom I have since gained great respect for, Mr. bungle, comes back and says: well, you know, we took a look and we have two solutions, and one of them incorporates your suggestion. Let's noodle on this for a bit and we'll get back to you. All right: so not too long after that, we saw a PR made and merged by Kong which contained a solution that adopted our suggestion, and we were informed to expect it in an upcoming release candidate in just a couple of days. So, great; we were thrilled at this point.
But a lot just happened there, and I'd like to unpack it just a little bit so we can highlight an important interaction. So what really went down? First, we had a problem. Next, we investigated that problem. Third, the issue came back to something in Kong, and we engaged with the community on that issue. The community suggested a solution, which we attempted; it didn't quite work, so we re-engaged the community. They provided us some feedback, which we used to inform an investigation.
We suggested a solution based on that investigation, and the community confirmed that solution. Now, what I'd like to draw your attention to is the fact that the investigation step occurs twice. So why did our first investigation of the problem yield such a different result from the second? What changed? It turns out that what changed, at least for us, was the nature of the problem. It went from being, in a closed source mindset, "oh, there's something wrong with this."
Let's look at a scenario where we ran into something that led to us being able to contribute back to the core of Kong. So I was just working on regular gateway things one day, and a customer reaches out and basically says what customers always say: oh, is the Gateway broken? Something's down today. Calling my APIs, I'm seeing HTTP 431; testing my API directly, it's working fine. What's going on? So, you know, I start looking into it, and I realize: HTTP 431.
Well, there must be some headers that are getting too big going back to your web server, so it's just reaching the header buffer size limit and rejecting the transaction. So I started doing a little bit more digging and realized that the ACL plug-in in Kong will, by default, include a header called X-Consumer-Groups. With this header in mind: in our architecture, for every Kong proxy we have, there's a route resource, and that route resource has a UUID. We use those UUIDs to inform which customers have access to different proxies.
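Here is a sketch of how that access model can be wired up. This is our reading of the mechanics described, not Optum's actual tooling, and the field name `allow` follows current Kong versions (older releases called it `whitelist`); the route name and consumer name are hypothetical:

```python
import requests

ADMIN = "http://localhost:8001"  # assumption: local Kong Admin API

# Look up the route's UUID.
route_id = requests.get(f"{ADMIN}/routes/my-api-route").json()["id"]

# Protect the route: only consumers in the group named after its UUID may call it.
requests.post(
    f"{ADMIN}/routes/{route_id}/plugins",
    json={"name": "acl", "config": {"allow": [route_id]}},
).raise_for_status()

# Grant a consumer access by adding that UUID to its ACL groups.
requests.post(
    f"{ADMIN}/consumers/some-customer/acls",
    json={"group": route_id},
).raise_for_status()
```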
So you can imagine, in a large enterprise, when you have lots of proxies, say hundreds or thousands, that header is going to get filled with what, hundreds or thousands of UUIDs. Knowing that, and knowing how it was growing in size and could potentially cause HTTP 431s like in this scenario, I raised a nice little Git issue to Kong, discussing a couple of points and bringing up the idea: well, maybe this header could just be configurable, you know, enable or disable it. Lo and behold, an engineer I respect from Kong,
Mr. bungle, came in and said: you know, I give the thumbs up for that change; a worthwhile change to implement in the ACL plugin. Now that I had worked with Kong for a little while, I'd started understanding a good number of portions of the codebase, and, courage in my heart, I decided to go ahead and get into the code and figure out where in the plug-in I could make that adjustment. It was great: Thibault looked it over, merged it in, and it all was awesome.
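For reference, the configuration flag in today's ACL plugin that matches this behavior is `hide_groups_header`; assuming that's the option in question, enabling it looks something like this (hypothetical service name, local Admin API):

```python
import requests

# Strip X-Consumer-Groups before the request reaches the upstream, avoiding
# oversized headers and the HTTP 431s described above.
requests.post(
    "http://localhost:8001/services/my-service/plugins",
    json={
        "name": "acl",
        "config": {"allow": ["team-a"], "hide_groups_header": True},
    },
).raise_for_status()
```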
It was pretty cool to be able to commit something back to a code base knowing that other people using Kong may not even know this interaction happened, but they're going to benefit from it regardless. But even better is the fact that every change that goes into an open-source product has lots of eyes on it. Another member of the Kong team reviewed it and said: hey, I think we could go even further with this.
You may have stopped the header from being passed to the back end and solved for your use case, but down at the Lua level there's also this variable that's getting set to all those UUIDs under the hood. If your configuration is enabled and we're not sending that header to the backend, we can just set it to an empty string and save on Lua memory pressure as traffic goes through Kong.
So with that in mind, and since he gave me a nice little code snippet of tests, and my other PR had already been merged at that point, I did a follow-up PR just implementing his suggestions. Now, one thing I want to point out that may not have been noticed in those two PR examples is the timing on them. They were merged the exact same day that I, as just some third-party contributor, sent them off to Kong, and that kind of turnaround time is phenomenal. I think it's one of the driving reasons.
So with that in mind, and with our expertise growing in Kong, it was very important that we offered this gateway service internally as a product to our customers. So what exactly does that mean? Well, for us, we took a very docs-first approach, and this has two main benefits. You know, when we were a small team of two engineers, you just can't constantly join customer calls with Q&A sessions about how to leverage the Gateway. So with our docs, which have really robust details on how to leverage the Gateway
and what we're offering, customers can work out how they want to leverage it within the company. Now I want to talk a little bit about the operations side of my organization and how we've managed this gateway. When we first started off, we had just productionalized the gateway; there was no ecosystem around the Gateway we were offering. So what did we do? We had an email distribution list through which we would work with customers.
When we had four to five customers, it was all right communicating through email to do things like creating proxies, authorizing consumers, and managing things that way. As we got more and more consumers, that became quite the nightmare: you got people responding to the wrong email chains, everybody responding at the same time to the same email. It just didn't work out very well. So we iterated one more time and had a work ticket queue, a first-in, first-out kind of setup. But customers hardly had any visibility
into the progress that had been made on their tickets, and at the end of the day the root problem is that you have a human making the request, a human trying to process that request, and the miscommunication and typos that come from that. So we iterated one more time, into a process that I think we're pretty happy with now, which is our GitHub ops self-service model. This is a solution where customers basically make PRs into a GitHub repo.
This kicks off an intelligent agent via a webhook, and we then process those resources against the Kong Admin API directly to create what the customer needs. What you're looking at right here is a very high-level view of the self-service flow. Very, very important is the first bubble: customers do need to read the docs.
You know, if they don't read the docs, the sheer number of questions they would have for the engineering team internally, here me and Ross, would just overwhelm us. So step one is the most important step. After the customer reads the docs on how to leverage it, they make a PR to GitHub. That causes a GitHub webhook to kick off to our intelligent agent, which then goes and reviews the resources submitted by the customer, checking that the post is valid through our taxonomy and governance program.
So with the self-service adoption, we now have over 300 unique users leveraging this internally, and 2,000 interactions every month. That equals 2,000 interactions where we didn't have to manually talk to a customer and work with them to process their proxies and their resources; they're driving it all themselves. It just frees up a lot of time for our engineers, and we really like the native GitHub integration, where you can tell who's changed what.
And then this is the kind of thing I think everybody likes to know: a sense of scale when you're working with Kong API gateway technology. This is where we are and where we're going. We have lots of APIs, lots of consumers, and high transaction volumes, and we are fully confident that Kong is going to be able to support our needs all the way through this journey.
So this is a completely shameless glamour slide here. If you want to show somebody one slide that gives context on how much Kong helped Optum, this is the slide to show them. So let's just jump right in and talk about performance. Kong has produced an 85 percent reduction in gateway overhead compared to our previous proprietary solution, in addition to being 90 percent more resource efficient, and you can kind of see that here. What you're looking at is a graph that shows the results of three separate comparison
tests. In the blue, a test against an API directly (it happened to be a Golang API): it's pretty fast, with an average response time of 6 milliseconds. In the orange, the same API with a Kong proxy in front of it: the average response time is 17 milliseconds. And it's only really when you look at the skyscraper on the left that you can visualize exactly how much better Kong is than our previous solution in terms of raw performance. (Put differently: Kong adds roughly 11 milliseconds here, and if that represents an 85 percent reduction in overhead, the old gateway was adding something on the order of 70 milliseconds per call.)
This kind of brings me to cost, which is another big one, especially for enterprises. Obviously, going from a proprietary licensed solution to Kong Community Edition eliminated our licensing costs, which is a good thing. And the fact that Kong is more extensible, and that we were able to engineer these DevOps-enabling tools around it, has allowed us to reduce our operations staffing by 85%. That, in conjunction with the fact that Kong is much more resource efficient, meaning we can run with less hardware,
saves a truckload of cash. In terms of supportability, like I was talking about earlier: you're looking right now at the entire Kong team at Optum. As you've seen on stage the last couple of weeks, the team's grown a little bit, but for almost two years we were supporting the operations needs of a three-hundred-thousand-person company, in addition to pursuing all of our engineering goals: getting those monitoring and alerting things set up, and iterating every time a new version of Kong came out, making sure we were getting it tested.
So if this is possible, it's clear that we're dealing with a high-quality product built with DevOps in mind. Let's wrap up by coming back to the question that we started this whole thing with, which is: how can we, as a large enterprise, have confidence in these open source products, particularly in mission-critical applications? And so here's our answer.
Step one: set the stage. Ensure first that you have a company culture which is curious, maybe even excited, about open source products, and if you don't have it, start it. Revisit any existing integrations that you might have had with the previous solution, see if there's some low-hanging fruit for you to try and tackle, and offer a stable solution from the start. That is how you can start off confident. How you stay confident is by cultivating internal experts.
This enables you to innovate and improve around the solution itself and also around all of its integrations within your environment. The way you do that is by participating in the community. This does two things: it helps you share knowledge and expertise, both from yourself to the community and vice versa, and it provides you an avenue for solving problems that you might not be comfortable solving on your own yet.
The way you take yourself to the next level is by contributing back to the community. There's no better way to hone your expertise than by getting direct feedback on your literal work, and the community benefits from this as well. Finally, offer a complete product to your internal customers. Enable an engineering culture by providing a docs-first approach. Keep your standards high, and you will enable your internal customers to innovate and engineer around your solution in the same way that you can innovate and engineer around the open-source product in general. And that's it, everybody.