From YouTube: OpenShift 3.x: Features, Functions and Future - Clayton Coleman and Mike Barrett (Red Hat)
Description
OpenShift Commons Gathering December 5th 2017 Austin, Texas
Clayton Coleman and Mike Barrett (Red Hat)
A: Good morning, everyone. Mike and I are going to tag-team this: I'm on the architecture side, Mike's on the product management side. And Chris really helped set us up, because the first thing I actually want to talk about as we get started is that OpenShift is a platform. Kubernetes is what we kind of think of as a kernel, and the analogy really applies to the idea of a distribution. So today you go get your Kubernetes, you add a few things to it, and everything works great.
A: The future is much more complicated than that, and I think that's what we were starting to see at the end of Chris's slides: we're talking about more and more pieces coming together to help you build these very large software platforms, for running applications, for doing network virtualization. The complexity of these stacks is going to be high.
A: One of the things that we really believe in is the transition from platform to distribution, which is the idea of taking the core pieces of the Kubernetes ecosystem and the projects that satellite around that ecosystem, those hundreds of open source projects that Diane had in her slide, and curating and bringing those in as they become stable, secure, and well integrated, focusing on experience, focusing on install, management, and lifecycle. These are really important things to us at Red Hat.
B: Just so we can marry the platform to you in the audience: raise your hand if you're from OpenShift engineering at Red Hat or you work with Kubernetes. These individuals are here to start interacting with you, and if you've never met a code contributor to the project, this is your opportunity to meet them. On the customer and partner and ISV ecosystem side: how many people are in, like, financial services, or touch money in some way?
B: OK, and how many are in, like, pharmaceutical manufacturing? And how many are in aviation, or utilities and telecom? Those are typically the primary mixes that we see in the customer and ecosystem side, so definitely talk to each other, and we'll help facilitate that conversation today. Okay.
A: So there's far too much for us to cover, so Mike and I are going to swiftly cover a massive ecosystem of projects and features and exciting things coming. I will absolutely forget things, and so if you find me or Mike later in the day, come up to us; we're very happy to talk to you ad nauseam about everything exciting that's coming in the ecosystem. So, community first: that's what Red Hat is about.
A: It's going to be really important for us to manage this transition from this big monolithic Kubernetes project, which a lot of people think of as "oh, you know, I go get these binaries and that's Kubernetes." That is going to change, and how it changes is going to be many more projects working together, collaborating in the open source ecosystem, in much the same way that you see with Linux.
A: The focus for us has been a lot on stabilization, but it's also been about forming that healthy community and keeping it going. So the Kubernetes steering committee was formed, and elections were held a little bit earlier this year, so Kubernetes now has a permanent steering committee with one- and two-year elected terms, and this is a group of people who are intended to help the community move forward. The SIGs in Kubernetes are very important: these are the groups of people who help contribute and drive the project forward in all the different areas.
A: Networking, cloud providers, storage. There is a new top-level SIG in the last few months, SIG Architecture, and this is intended to kind of be a place where we can set the direction of the Kubernetes core and also help identify what is and what isn't core Kubernetes. That will help make the transition I talked about, from platform to distribution, in the ecosystem.
B: Typically, when we go out and talk to users of these technologies, they fall into four camps. One of the camps is, you know, your next-generation applications, your microservices; these are typically your lines of business that are trying to move faster, and that's very much greenfield. Then we have a large brownfield footprint of revenue-generating applications that are using technologies today, and they have to merge that with what they did yesterday. And then we have next-generation IT ops, and these women and men are trying to reorganize their data centers.
B: Maybe that means moving on-premise to a public cloud, maybe that means something else, but they're looking for technologies to help them do that. And then there's the transformational: the digital initiatives, the CTO office. When we look at that footprint, we're looking for technologies that we can use to solve all four of those use cases, and a lot of these patterns come out of that. Content is king.
B: If we didn't have the ability to give you this content, then the platform wouldn't really shine, and if the platform didn't shine, we really wouldn't give you a way to have that new content. So we have next-generation cloud-native services. We have a new concept with our middleware business units: how are they going to have a service on the platform instead of just having an application on the platform? Then we have a lot of low-latency, next-HPC-evolution type features, which I'll get into a little bit later.
A: First off, while we're still talking about platform: stability is probably the most important thing, being a reliable foundation. If you don't have a reliable foundation, there's no point to building something on top of it. So all those services that we talked about before, all those features, depend on having a core platform that is stable. So in Kubernetes 1.7, 1.8, and 1.9...
A: ...there was a very strong focus on fixing bugs and moving features into stable patterns, but there was also another focus, something that on the Red Hat side we were very focused on, which was that production matters: refining and tightening and polishing the system at scale, in some of the most demanding environments in the world, and making sure that we have a good foundation to build on for the next several years. I'm going to give a couple of examples.
A: Take events: the actual mechanics are lots and lots of low-level details, but we tried to fit it into that overall question of what is going to let people understand what's going on in the platform at a very fundamental level, while keeping that core feature in place and continuing to refine it and polish it.
A: A side effect of that was also very dense clusters. Many OpenShift users run extremely dense clusters where they're not just running... So we added a number of features that make it easier to deal with very large data sets from the API perspective. Anyone who's using an API in Kubernetes will benefit from this, but it also enables some of what I'll talk about in a little bit, which is that as we begin to make Kubernetes a platform for extension, people can bring new types of infrastructure APIs. We talked about Istio, we talked about radanalytics; anyone who's building APIs on top of Kubernetes will also benefit from some of these improvements.
A: We've worked in very practical environments to integrate Prometheus very deeply, to make sure that data is flowing up, but also to keep an eye on those early cases that we knew of where people are monitoring the platform. We want that information to flow up not just into Prometheus but into CloudForms and some of the other tools and technologies that our customers have already built around OpenShift.
A: We use these metrics to help guide some of these optimizations we've been talking about, and to really focus on that large-scale Kubernetes and OpenShift experience. So I'm going to jump through some of these and leave some time at the end for questions; if you see something here that we skip over, please don't hesitate to ask, and we'll make sure there's some time at the end. A big part of Kubernetes and of OpenShift is about efficiently using resources.
A: That's one of the things that we've always heard from users that are looking to build applications rapidly: from an operational perspective, they want to make sure that those resources are used effectively. And so a key part of that lifecycle is understanding what's running where, which Kubernetes is pretty good at today. But then there's the flip side of it:
A: what resources they're using, CPU, memory, disks, and optimizing the platform to ensure that all applications are getting a reasonably fair share. And so there's a lot of work going on in Kubernetes and in OpenShift around this. One of the high-level, near-term things is that we're really working to standardize the core system metrics.
A: There have been projects for a long time, you may have heard of Heapster and cAdvisor, and we're looking to turn those into formal APIs so that other components, like the scheduler, can depend on them, and we're going to use that to tie the platform back to itself. So when you're running very dense clusters, you'll be able to benefit from knowing exactly how much of the resources are in use at any one point on the cluster, and feeding that back into autoscaling on custom metrics. Autoscaling on application metrics is also really important to us, and you'll continue to see that evolve over the next few months.
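To make the autoscaling-on-metrics idea concrete: the Horizontal Pod Autoscaler's documented scaling rule is proportional, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). The sketch below is only an illustration of that formula, not the real controller, which also applies tolerances and stabilization behavior:

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    # Core HPA rule: scale the replica count in proportion to how far
    # the observed metric is from its target value.
    return math.ceil(current_replicas * current_metric / target_metric)

print(desired_replicas(4, 200.0, 100.0))  # 8: metric is twice the target
print(desired_replicas(4, 50.0, 100.0))   # 2: metric is half the target
```

The same rule works whether the metric is CPU, memory, or a custom application metric served through the metrics APIs being standardized here.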
A: Extensibility, which I touched on just a few minutes ago, is so fundamental to a platform, because a platform that isn't extensible has to keep implementing features until it implodes under its own weight. Our goal with Kubernetes, the mindset of Kubernetes and OpenShift, is to be an open platform for building application-focused infrastructure, not infrastructure-focused infrastructure. The point of the API is not to go get a VM.
A: The point of the API is to run your applications, and we want it to be easy for people to use those tools to build new APIs that are very application focused. So we talked about service mesh with Istio. Many of the things that we've been doing on extensibility will be leveraged by the Istio project, to add additional service policy to regular Kubernetes applications, to upgrade applications and inject intelligence into this. And as a long-term arc, the more extensible Kubernetes is, the more it becomes...
B: Yeah, let's get to some networking. Before I do: how many people are designing or part of clusters that are under a hundred nodes? And then how many are above 250 nodes? Cool. So I'll let you know, from a global point of view, that in 2016 I would say we had the majority of our population planning for around 100 nodes. In 2017...
B: When you start getting into these kinds of numbers, and when you start thinking that there are on average between 50 to 70 containers running on any given node, those are pretty impressive density numbers. On the networking side, as we approach those higher numbers we start to see some inefficiencies in iptables that we're working on.
B
It
allows
you
to
look
at
pod
labels
and
really
control
who's
initiating
traffic
to
what
services,
and
that
really
opens
the
door
to
a
whole
new
level
of
granularity
control
in
the
network
that
we've
just
ever
had
it's
now
fully
stable
and
kubernetes
and
in
openshift.
So
it's
something
you
can
definitely
take
a
part
of
where
it's
growing
is
on
its
aggress
on
how
we
leave
that
that
cluster
there's
a
lot
of
clever
things
coming
to
bear
there
and.
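As a rough illustration of the pod-label-based control being described, here is what a NetworkPolicy looks like, sketched as a plain Python dict (the talk shows no code; the `frontend`/`db` labels and port are invented for illustration), plus a tiny evaluator for this one policy:

```python
# Hypothetical NetworkPolicy: only pods labeled app=frontend may initiate
# traffic to pods labeled app=db on TCP 5432. The dict mirrors the
# networking.k8s.io/v1 manifest structure.
policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "db-allow-frontend"},
    "spec": {
        # Which pods this policy protects (the traffic target).
        "podSelector": {"matchLabels": {"app": "db"}},
        "policyTypes": ["Ingress"],
        "ingress": [{
            # Who may initiate traffic to the selected pods.
            "from": [{"podSelector": {"matchLabels": {"app": "frontend"}}}],
            "ports": [{"protocol": "TCP", "port": 5432}],
        }],
    },
}

def allowed(src_labels: dict) -> bool:
    # Does a pod with src_labels match any ingress 'from' podSelector?
    for rule in policy["spec"]["ingress"]:
        for peer in rule["from"]:
            sel = peer["podSelector"]["matchLabels"]
            if all(src_labels.get(k) == v for k, v in sel.items()):
                return True
    return False

print(allowed({"app": "frontend"}))  # True
print(allowed({"app": "batch"}))     # False
```

In a real cluster the network plugin enforces this; the evaluator above only illustrates the label-matching idea.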
A
How
can
I
make
sure
that
only
the
appropriate
applications
connect?
Those
because
I
have
a
corporate
policy
that
says
this
is
exactly
the
corporate
security
checklist
item
that
says
this
is
how
I
have
to
do
security
for
those
databases
and
so
taking
those
kind
of
that
kind
of
feedback
working
within
the
kubernetes
ecosystem
and
building
those
into
OpenShift
and
other
projects
in
the
ecosystem
has
been
a
real
big
focus
for
us.
A: You must be lying to me when we're on the phone, because it's extremely important. You know, it's coming from three areas: it's coming from government, it's coming from telecom, and it's coming from the OpenStack community, who just got IPv6 support probably, I think, six months ago or so. We run a lot on OpenStack, and so those synergies are coming together, where it's a big enough population for us to really push it forward.
B: Storage. So StatefulSets really came into their essence, and in the last two months or so we really closed the gap on some of the things that were left on the table. That means those types of applications, those databases, are typically looking for local storage; they want that high throughput on the host. You don't want to be designing a Kubernetes distributed cluster around where things are physically attached, so the scheduler needed to be made a little smarter about what is connected to those nodes.
B: So we can dynamically schedule something based on that. We can still have our PV and PVC concept, that sort of user experience, with these local storage devices, and that all completed into alpha stage; it should be ready to use in Kube 1.9, I believe. In 3.7, which just came out in November, it's in tech preview, so please, it's in the product, give it a try. The last thing there is resizing and snapshotting; snapshotting also became tech preview in 3.7.
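A rough sketch of the local-storage flow being described, again shown as Python dicts for illustration (node name, path, and sizes are made-up values): the local PersistentVolume carries node affinity, which is what lets the scheduler place pods on the node the disk is physically attached to, while the claim keeps the usual PVC user experience.

```python
# Hypothetical local PersistentVolume. The nodeAffinity block tells the
# scheduler which node this disk lives on; everything here is illustrative.
local_pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "local-pv-node1"},
    "spec": {
        "capacity": {"storage": "100Gi"},
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "local-storage",
        "local": {"path": "/mnt/disks/ssd1"},
        "nodeAffinity": {
            "required": {
                "nodeSelectorTerms": [{
                    "matchExpressions": [{
                        "key": "kubernetes.io/hostname",
                        "operator": "In",
                        "values": ["node1"],
                    }],
                }],
            },
        },
    },
}

# The claim side is unchanged from the normal PV/PVC experience:
# request a size from the local-storage class and let binding happen.
local_pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "db-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "local-storage",
        "resources": {"requests": {"storage": "100Gi"}},
    },
}
```

The point of the design is exactly what's said above: the tenant still thinks in PVs and PVCs, and only the scheduler has to know where the disk physically is.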
B
It
allows
your
tenants
the
ability
to
go
ahead
and
snapshot
to
their
Peavey's
based
on
the
underlining
storage
technology.
So
it's
a
it's
an
AWS.
It's
a
cheap,
PC
type
thing.
You
now
have
that
ability
to
snapshot
based
on
that
underlying
technology
has
a
tenant
on
the
platform.
So
it
definitely
take
it.
Take
a
look
at
that
and.
A
You
know,
overall,
one
of
the
things
that
I
think
has
been
a
strong
focus
for
us
is
that
we
do
deal
with
the
hybrid
world
as
Chris
alluded
to.
Is
it's
not
just
software
on
one
side
of
the
equation?
Is
it's
a
it's
a
very
complicated
world
and
there's
very
there's
different
degrees
of
demanding
applications
that
need
to
run
and
depending
on
whether
you're
running
a
database
or
a
stateless
application
or
you're
running
a
machine
learning
framework?
A
What
we're
trying
to
do
is
set
a
path
in
kubernetes
for
some
of
these
core
concepts
and
allow,
through
extensions
and
other
features,
to
allow
more
complex
solutions
to
evolve,
and
you
know
this,
this
will
continue
to
evolve,
but
I
would
I
think
it's
very
useful
to
say
that
the
arc
that
we're
on
is
to
make
everything
possible
and
some
things
very
easy
and
to
give
people
the
tools
that
they
need
to
build.
Much
more
complex
and
sophisticated.
A
A
B: The only design principle we're going after here is to make sure you have a choice. So as you start to get into these next-generation container runtimes that are more focused on the orchestration layer, we want to make sure that you have the ability to choose one of them, and you'll always have that choice with our solutions.
B: Service brokers are a huge part of that, and we're pretty much in the age of the service broker at this point. We got very attracted to service brokers because our community wanted us to really give a better user experience to how a tenant connects his or her application services together, and that attached itself very rapidly to this age of wanting to bring cloud-provided services into the data center.
B
You
are
given
a
list
of
secrets
to
connect
to
that
service,
so
we
have
one
more
step
to
automatically
do
that
last
step
for
you
that
should
come
in
the
next
release
or
so
and
then
on
the
granularity
right
now,
all
the
services
are
the
same
for
everybody.
We
want
to
make
sure
that
Mike
Barrett
is
allowed
to
have
different
services
than
Clayton
and
that
Mike
Barrett's
on
a
different,
say
AWS
or
your
payment
program
than
Clayton
is
because
Layton's
a
hog
when
it
comes
to
spending
and.
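The "list of secrets" handed to a tenant typically lands as a Kubernetes Secret the pod can mount. As a minimal sketch of consuming such a binding (the mount path and key names here are assumptions, not anything prescribed by the broker spec):

```python
import os

def load_binding(binding_dir: str) -> dict:
    # A mounted Secret exposes each key as a file whose content is the
    # credential value; read them all into a dict. Returns {} if the
    # binding has not been mounted.
    creds = {}
    if not os.path.isdir(binding_dir):
        return creds
    for name in os.listdir(binding_dir):
        path = os.path.join(binding_dir, name)
        if os.path.isfile(path):
            with open(path) as f:
                creds[name] = f.read().strip()
    return creds

# Hypothetical mount point chosen for illustration only.
db_creds = load_binding("/var/run/secrets/db-binding")
```

The "one more step" mentioned above is automating this wiring so the application never has to read the secrets explicitly at all.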
A
And
just
like
service
kind
of
log
install
and
upgrade
I
talked
about
stability
and
reliability,
ensuring
that
we
make
every
upgrade
of
openshift
and
kubernetes
work
extremely
well
in
you
know.
I'll
be
totally
honest.
You
know,
openshift
deals
with
a
large
number
of
very
different
environments
and
very
different
customer
requirements,
and
that
is
our
focus.
A
You
know
our
installers
are
intended
to
work
on
every
cloud
provider
on
every
bare
metal
platform
everywhere
that
rail
runs,
and
so
we
have
to
work
through
these
cases
and
make
sure
that
their
support
as
well
so
in
opens
the
next
version
of
open
shift
is
actually
going
to
be
open
shift.
3
9
it
is
going
to
do
a
rolling
upgrade
through
kubernetes
1/8
directly
into
kubernetes
1/9,
and
so
it's
a
in
a
sense.
B
There's
a
subtlety
there,
that's
a
that's
a
skipping,
a
release
for
the
first
time,
which
meant
the
Installer
had
to
be
smart
enough
to
do
that,
behind-the-scenes
for
the
first
time
so
I
know
a
lot
of
you
in
the
community
want
to
make
sure
that
you
could
skip
releases
at
some
point
right
now,
we're
forcing
you
to
do
that
serially!
This
is
the
first
engineering
project
that
solves
that,
for
you
and.
A: A little bit about reference architectures: we're expanding the amount of examples and guidelines for best practices for installing on the different cloud providers. A really key initiative underneath that is moving to a more cloud-native model on machines. We've always had to bridge that gap between bare metal, where there is no dynamic provisioner for a bare metal box, all the way up through VMs and into cloud providers. Starting with OpenShift...
A: ...3.7, and working into OpenShift 3.9, there's going to be a lot of focus on a cloud-native approach to individual machines, which really just boils down to: the machines themselves will be stamped out of images. This will be the standard way we install and deploy. Machines come up, they connect to the cluster, and an administrator can either auto-approve them or just run them through the process.
B: Management. We've always given you the ability to use our ManageIQ open source project; it's productized here at Red Hat as CloudForms. We've taken it extremely far in this next release: we've modified it, we run it fully supported in containers, and it fits a template deployment pattern on Kubernetes. At this point it is just a management API on your Kube cluster, and it has an amazing amount of features that we can start really pushing into operational best practices. It has chargeback.
A
Should
really
help
dealing
with
multiple
clusters
and
bringing
that
information
together
in
a
single
unified
dashboard,
so
we're
starting
to
run
a
little
bit
low
on
time
so
I'm
just
gonna,
tease
here,
there's
a
ton
of
great
work.
That's
coming
in
Coober,
that's
gonna
come
in
kubernetes
110.
We're
gonna,
obviously
continue
scaling
and
bug
fixes
extensibility
and
improvements.
If
I
had
to
say
one
thing
that
was
really
important
again:
I'm
gonna
go
back
to
that
resource.
Metrics
is
about
making
the
system
just
work.
A
The
autonomous
aspects
of
kubernetes
that
will
make
walking
away
from
a
cluster
and
having
everything
continue
to
tick
over.
So
even
some
things
that
aren't
even
on
this
slide,
red
headers
have
been
working
on
fencing
for
bare-metal
and
VM
environments.
That'll
make
it
easier
to
automate
recovery
actions
and
there's
a
ton
of
work.
A
That's
going
to
go
in
over
the
next
few
releases
to
tie
that
to
close
that
loop
between
application,
author
intent,
the
operational
platform,
the
operational
policies
that
administrators
have
put
in
place
around
quota
and
resource
usage
and
over
commit
and
reliability
and
tie
that
back
in
so
that
the
system
can
help
can
do
more
to
manage
itself.
So,
let's.
B: Talk a little serverless. Chris brought this up, so I'll get a little more deeply into it. When we talk to the OpenShift and Origin community around serverless, what they're really looking for is an opportunity to have a different pricing model, and what we can bring to the table, by taking a serverless technology like OpenWhisk, bringing it into Kubernetes primitives, and making it a user experience with OpenShift, is that pricing model.
B
So
when
you
look
at
function
based
computing
and
you
have
all
your
functions
designed
for
your
application
of
micro
services,
you
deploy
them
out,
they
hit
pods
that
are
running
those
runtimes.
They
execute
on
those
pods.
Now,
what
would
it
be
great
if
they
were
able
to
idle
if
they
were
able
to
use
HPC
or
HPA
custom
metrics?
A
I'll
note
that
those
are
some
of
the
same
additions,
those
improvements,
idling
resource
usage-
that's
actually
something
you
know.
Idling
has
been
in
the
OpenShift
product
since,
though
three
to
release
but
again
idling
and
the
ability
to
reduce
resource
usage
and
to
spread
workload
over
time
is
gonna,
be
really
important.
It
is
gonna,
be
a
key
focus
for
us.
A
Application
config:
this
is
kind
of
the
idea
that
config
is
a
very
complicated
thing.
There's
many
different
ways
to
define
applications
from
the
very
simple
micro
service,
all
the
way
to
something
like
OpenStack,
there's,
no,
one-size-fits-all
solution,
an
effort,
that's
under
that's
going
on
and
the
communities
community
is
to
try
and
blur
the
lines
so
that
we're
using
common
tools
so
that
we
have
common
ways
of
talking
about
what
applications
are
and
look
for
patterns
that
can
be
reused
in
multiple
ways.
A
So,
if
you're
deploying
giant
massive
applications,
you
may
want
to
deploy
that
giant
massive
application
all
at
once.
If
you
are
a
bunch
of
individual
teams,
you
may
want
to
reuse
the
same
tools
that
you
know.
A
giant
project
is
using
to
deploy
everything
in
your
individual
spot
and
each
individual
developer
might
want
the
flexibility
to
customize
their
tools
and
so
we'll
continue
to
evolve
how
applications
are
defined
as
kind
of
a
long
arc.
We
don't
think
this
is
by
any
means,
the
end
of
where
we'll
go
with
configuration
and
application
configuration.
A
B: So it's a pretty popular technology from our Red Hat point of view, and when we're talking to our customers, it falls into either a north-south or an east-west conversation. On the north-south side, we've been championing HAProxy for quite some time, and we have a list of requirements that people have wanted us to close the gap on, and these next-generation web proxies closed a lot of those requirements for us. We get to dynamically change URLs; we get to change certs in a much more automatic fashion.
B
So
it's
a
it's
a
huge
leap
there.
It
also
does
HC
BT
2,
which
is
coming
up
quite
a
bit
on
the
east-west.
This
is
a
this
is
interesting
right.
If
whoever
thought
that
you
were
going
to
put
a
web
proxy
in
the
front
of
every
single
application
service,
that
would
be
insane
if
you
didn't
have
containers
and
if
you
didn't
have
a
container
platform
to
accomplish
that
right.
B
That
would
that's
voodoo
you,
don't
you
don't
put
a
web
proxy
in
front
of
every
application
service,
but
if
you
did
do
that
holy
cow,
look
at
all
the
things
that
fall
out
of
it
right
now,
you
can
you
can
meter
it,
you
can
control.
Who
is
the
ability?
You
know,
privacy,
there's
a
circuit,
breaker
concepts
and
it
solves
the
number
one
thing
that
Netflix
and
it's
OSS
components
failed
to
solve.
Thou
shalt
not
make
the
application
developer
develop
to
the
platform
right.
That
was
a
number
one.
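The circuit-breaker concept mentioned above, which a service-mesh sidecar applies transparently so the application never codes against the platform, can be sketched like this (a toy model, not any real proxy's implementation; thresholds are invented):

```python
import time

class CircuitBreaker:
    # After max_failures consecutive failures the circuit "opens" and
    # calls fail fast, instead of piling load onto a struggling service;
    # after reset_after seconds one trial call is let through again.
    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

In the mesh, this logic lives in the per-service proxy, which is exactly why the application developer doesn't have to develop to the platform.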