From YouTube: State of OpenShift on Bare Metal: Jose Palafox, Intel at OpenShift Commons Gathering Seattle 2018
Description
State of OpenShift on Bare Metal: Intel PoV | Jose Palafox (Intel) | David Cain (Red Hat) | Jeremy Eder (Red Hat)
https://commons.openshift.org/gatherings/Seattle_2018.html
A: So, just a little bit of background. I don't know if folks here in this room have seen, but over the past 10 or 11 months there have been a lot of announcements in the greater community about the bare metal cloud market. Historically, cloud providers have offered mostly virtualized instances, and what we've seen recently is that most of them now offer the ability to consume bare metal instances on their respective platforms. We have two analyst reports; one, from Grand View Research, says that by 2025 this is meant to be a twenty-six-billion-dollar market, with an annual growth rate of about forty percent, so it is really taking off. AWS, Oracle, and the IBM Cloud all have offerings where you can consume this, but it's not just public clouds.
A: Also private clouds. OpenStack had its semiannual conference in Berlin just a month ago, and the user survey results are summarized here on the bottom right: which emerging technologies interest OpenStack users broadly. OpenStack is a platform; you can install OpenShift or Kubernetes on top of it. The top three were containers, bare metal, and hardware accelerators, so there is a lot of interest in both public and private clouds.
A: Today, the majority of OpenShift and Kubernetes deployments we've seen are in virtual machines, whether that be VMs on top of OpenStack, public cloud platforms, or traditional virtualization like vSphere. The reasoning for that is that a lot of folks have standardized on these platforms and have come to rely on the automation and orchestration that these platforms provide.
A: So in some cases it's a lot easier to get started deploying a Kubernetes or OpenShift environment on virtual machines, simply because that's easier than finding a stack of bare metal hardware. But what we've found is that interest in deploying Kubernetes and OpenShift on bare metal is definitely growing, driven by a couple of factors that I want to take you through here; certainly the keynote earlier from Chris Wright spent a slide on bare metal.
A: One aspect is specifically reducing VM sprawl and the cost of having that software and infrastructure as an expense, but a lot of it is also driven by emerging applications. A couple of examples: running databases directly on bare metal, or artificial intelligence and machine learning, or applications that specifically benefit from having targeted access to dedicated hardware devices. We call those workload accelerators, and support for them landed with the device manager in version 3.10 of OpenShift.
B: When we talk about accelerators at Intel, we actually have a pretty broad portfolio of accelerator hardware products to complement our core Xeon offering. This is just a quick splash of some of them; I think there are probably more. The Altera acquisition brought with it a lot of FPGAs, and we've made a number of other acquisitions in the same space to expand our portfolio.
B: So when we think about how we use these accelerators in production, it's not as simple as racking them up and having them magically work. There's some work we do behind the scenes to set that up, and where we're headed is adding these accelerators and making them schedulable resources inside of Kubernetes or inside of OpenShift. So we'll talk through the strategy for that.
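As a rough sketch of what "schedulable" means in practice (not taken from the talk; the resource and image names are placeholders), once a device plugin advertises an accelerator to the kubelet as an extended resource, a workload simply asks for it by name and the scheduler only places the pod on nodes that have it:

```python
# Hedged sketch: a pod requesting an accelerator that a device plugin has
# advertised as an extended resource. Resource and image names are placeholders.
import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "accelerated-workload"},
    "spec": {
        "containers": [
            {
                "name": "app",
                "image": "registry.example.com/app:latest",
                "resources": {
                    # Extended resources must be whole integers and go on limits.
                    "limits": {"vendor.example.com/accelerator": 1},
                },
            }
        ]
    },
}

# Kubernetes accepts JSON manifests, so this can be piped to `kubectl create -f -`.
print(json.dumps(pod, indent=2))
```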
B: So if you think about what Intel's role is in Kubernetes, or at least what my team inside Intel is working on: we take some hardware accelerator or Xeon feature and we try to expose it to Kubernetes so that it's aware of and able to schedule against that resource. Then we look at taking a workload, something like Redis, or maybe even just a task like compression, decompression, or TLS offload, something we can use an accelerator for, and we make changes to the upstream product to take advantage of the accelerator. Once we have that done, we work with Red Hat on the Operator Framework, and our plan is to help accelerate the ecosystem around operators, which you heard about earlier today, so that there's a library of common utilities available for people to manage these workloads. Then the last step, where I think we're going, is to help write open service brokers, or some other way to expose this as a self-service offering. The technology pieces may shift a little bit here, but the general story is: how do we take the accelerator feature, expose it to the scheduler, make sure it's consumable, and then make it a developer self-service ordering process for you? So that's sort of our roadmap, and to give you an idea of what that looks like, we've highlighted two examples here that may be relevant to this community.
B: So if you think about our portfolio, we just introduced a new memory medium that we are able to offer, not instead of DDR4 but to complement DDR4, for in-memory databases. So we create this new product; great, what does that do for us? Now we have to make sure that drivers for it are available in Linux, so we work with Red Hat and the RHEL team to make sure the kernel can support this new hardware product.
B: Then in Kubernetes we write a CSI driver. CSI is an extension point in Kubernetes for storage, and the driver allows you to use the memory as a storage device. We just published the driver, released today actually, so if you go to the Intel GitHub page you can find our new CSI driver for persistent memory.
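A minimal sketch of how that gets consumed (the StorageClass name here is a placeholder, not something stated in the talk): once the CSI driver is deployed, persistent memory is claimed like any other storage class and mounted into a pod, for example by a Redis instance.

```python
# Hedged sketch: with a CSI driver for persistent memory installed, an application
# claims pmem-backed storage through an ordinary PersistentVolumeClaim.
# The StorageClass name "pmem-storage" is illustrative.
import json

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "redis-pmem"},
    "spec": {
        "storageClassName": "pmem-storage",  # provided by the CSI driver deployment
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "8Gi"}},
    },
}

# A pod (e.g. a Redis instance) would then mount this claim as a regular
# persistentVolumeClaim volume and point its data directory at the mount path.
print(json.dumps(pvc, indent=2))
```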
B: We then go and modify the upstream open source products that can use persistent memory; in this case I selected Redis as an in-memory database. Then we'll work with the operator community to standardize on an operator, and then write the open service broker. So you can see we have a vertical stack of enabling activities that goes through the ecosystem.
B: Another example that we brought along is with Istio and Envoy. In this case we wrote a driver for our QuickAssist Technology (QAT), which can do TLS offload. Then we make sure that the upstream products can utilize QAT after exposing it to the scheduler, and we're building a community around making sure that this use case can be taken advantage of both on-prem and in cloud contexts. So these are just two examples. There is a PR currently open for some of the changes we've made in Envoy, if you're interested in checking it out.
B: We've added the link there. If these are interesting use cases for you, please let us know; I'd love to talk to folks afterwards about what's important to you with accelerators, because I think this is an emerging space of thinking, at least in a cloud-native context. How we go forward is definitely something we're looking for feedback on and hoping you can help influence with us.
C: Okay, so thanks Dave. This morning we had Chris Wright on stage, and I just wanted to share this slide with you. I've got some arrows pointing to things that are kind of a feather in our cap, because if you're an engineer and you get one bullet on a keynote slide, that's some kind of achievement; we got two bullets. So we're talking about bare metal here and we're talking about performance-sensitive applications, and enabling those sorts of things is key to our customers.
C: That's the feedback we hear, and the tea leaves are pretty easy to read: public cloud providers are adding accelerators of all kinds as differentiated offerings, such as TPUs, GPUs, and FPGAs. Those sorts of things, as well as bare metal servers, are all commonly available from cloud providers at this time.
C: Okay, I don't know if you've seen this slide before, but it's actually mandatory that I show it. This is a project we've been leading for a couple of years in the upstream, along with Intel and other vendors, to bring performance-sensitive workloads to Kubernetes. The extent of my graphic design skills is on display here, and the idea is just that there's some overlap between all of these high-performance workloads, and the initial work that we should be doing as open source vendors and open source developers is to work horizontally to enable whatever workload you want to run on top of Kubernetes and OpenShift, whether it's network function virtualization, which I'll talk mostly about today, or GPUs (you just saw a presentation on those, which was last year's landmark), or HPC and all these other workloads that have similar accelerator or high-performance requirements.
C: So there you go: coordinate and plumb these generically. Here's some market research that we have done and some anecdotal feedback we've collected from customers over the last couple of years. This is kind of an eye chart, so I apologize, but I'm trying to make two points. One, there's plenty of overlap between the different market segments, so as a team that would be a good place to start because it's super high value. The second point, which is the bottom graph, is that across all these public clouds a lot of these capabilities already exist or are on the roadmap for inclusion imminently. This is a bare-metal talk, so we'll stick to bare metal, but all these other cloud providers offer things like this; and I apologize if anyone's here from a cloud provider and I've got your cell incorrect, see me afterwards and we'll have the flogging. Okay, next slide please. Here's the second landmark for Kubernetes.
C: From my perspective it started about two years ago, maybe even longer than that. We had discussions in the upstream SIG Network community around multiple networks. Who here has one single interface on all of their computers and will never, ever need a second one?
C: Let the record show there were zero hands. That's pretty obvious, right? I came from a data center provider; every server had like ten NICs in it. You've got a storage network, you've got a management network, you've got out-of-band management, you've got prod, whatever, and each one is bonded.
C: So there are like eight interfaces, minimum. That's pretty standard; in public cloud it's all abstracted away from you, but let's just say you're building on bare metal, which is why we're here today. So we talked about it upstream, and we couldn't necessarily come to an agreement yet on what Kubernetes needs to know about multiple networks. So the Intel team founded a project called Multus, which allows Kubernetes to call more than one CNI driver. That's the main function of it.
C: Kubernetes right now can only call one; Multus can call more than one. With that base of understanding: that project was founded about two years ago, and in the interim time, fast-forwarding here, the Intel team worked on Multus v1 and v2. By the way, I think there are some Multus folks in the room; Multus is, again, available on Intel's GitHub, all open source and so forth, and we should be able to put you in touch with one of the developers if you have specific feature requests on Multus.
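As a rough illustration of what "calling more than one CNI driver" means (the exact configuration keys vary between Multus versions, and the delegate plugin settings below are placeholders), Multus consumes a single CNI configuration whose delegates list names every plugin to invoke for each pod:

```python
# Illustrative sketch of a Multus-style CNI configuration: one meta-plugin entry
# that delegates to several real CNI plugins. Keys and delegate settings are
# placeholders; consult the Multus documentation for the exact schema.
import json

multus_conf = {
    "cniVersion": "0.3.1",
    "name": "multus-demo",
    "type": "multus",  # the meta-plugin that fans out to the delegates below
    "delegates": [
        # First delegate: the cluster's default pod network.
        {"type": "flannel", "name": "cluster-net"},
        # Second delegate: an extra, fast data-path interface on a dedicated NIC.
        {
            "type": "macvlan",
            "name": "data-net",
            "master": "eth1",
            "ipam": {"type": "host-local", "subnet": "192.168.20.0/24"},
        },
    ],
}

# Written to /etc/cni/net.d/ on each node so the kubelet hands pod setup to Multus.
print(json.dumps(multus_conf, indent=2))
```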
C: Last year at KubeCon in Texas, where it snowed (I guess I showed up; it also snowed in Raleigh, and I just escaped the biggest storm of the century down there), we did a face-to-face between Intel and Red Hat, because we felt we should align behind some open source project and help break the logjam that the upstream community was kind of stuck on. So at that time we talked with them.
C: We came to a ladies' and gentlemen's agreement that we would work together upstream on Multus, and at the same time we didn't want to leave the upstream community out of the loop, so we founded a new working group. Dan Williams from Red Hat helped push forward the Network Plumbing Working Group, which meets every other Tuesday, I believe, at some ungodly hour, and through that community we have established some important beachheads in this space. One: you heard a lot about custom resource definitions this morning.
C: We have a v1 of an API spec for multiple network attachments that's pushed into the Kubernetes SIGs GitHub. That work took a long time, but it was one of the major deliverables of the last 12 months. So now we have a pseudo-standard that vendors can rally around: if they need to attach another network interface, here's the spec that you write to, and it'll be portable, which is, I guess, the reason for any spec.
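To make the spec concrete, here is a minimal sketch: the apiVersion, kind, and pod annotation follow the published Network Plumbing Working Group convention, while the macvlan settings and names are illustrative. A NetworkAttachmentDefinition describes a secondary network as a CNI config, and a pod opts into it via an annotation.

```python
# Sketch of the Network Plumbing Working Group pattern: a NetworkAttachmentDefinition
# carries a CNI config, and a pod requests the extra interface with an annotation.
import json

net_attach_def = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "macvlan-data"},
    "spec": {
        # spec.config carries an ordinary CNI configuration as a JSON string.
        "config": json.dumps({
            "cniVersion": "0.3.1",
            "type": "macvlan",
            "master": "eth1",
            "ipam": {"type": "host-local", "subnet": "192.168.10.0/24"},
        })
    },
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "dual-homed-pod",
        # Multus reads this annotation and attaches the named network in
        # addition to the cluster's default pod network.
        "annotations": {"k8s.v1.cni.cncf.io/networks": "macvlan-data"},
    },
    "spec": {
        "containers": [{"name": "app", "image": "registry.example.com/app:latest"}]
    },
}

for manifest in (net_attach_def, pod):
    print(json.dumps(manifest, indent=2))
```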
C: Intel worked heavily with us on that. In fact, we had a shared Trello board between the teams and bi-weekly meetings, so there was a tremendous amount of engagement, not only upstream but also between Red Hat and Intel working together on the back end. That was most of the first half of this year; we finalized the spec and we finally started putting it to work.
C: We had a demo at ONS, the Open Networking Summit in Amsterdam (I believe that's right, Dave? Yes, Dave was there), that showed Multus running on Kubernetes. We actually have two other demos: Multus on OpenShift, which is a functional demo, and then a performance test of SR-IOV (on bare metal, actually), and those are in the Intel and Red Hat booths.
C: We have demos of both of those pieces of technology scheduled on Tuesday and Wednesday. So if you're interested in multiple networks and you want to talk to the people putting it together, our booth or Intel's booth is a great place to discuss it with the folks who have the most skin in the game. Okay, finally, what I can talk about now, as of OpenShift 4.0, since we announced it this morning, is that we're planning on including all of that work from the last year; it doesn't go, you know, into the waste bin.
C: We're going to include Multus 3.0, or let's just say 3.x, in OpenShift 4. We're going to create operators around it, we're going to create device plugins, obviously along with Intel, and admission controllers, and they're all containerized and so forth. That's something you can see in the demo.
C: Now, the user interface: let's hope it's not a mess. What we want to do is add Multus as a feature. This morning it was mentioned, I think by Mike Barrett, that every scrum team inside OpenShift has pivoted and started writing operators. Ours are included in that, and so we have glommed onto the SDN team's work to build a network operator for OpenShift, which you will get if you hit try.openshift.com today.
C: We will be a feature in that operator, so you will be able to express additional configuration through these CRDs and plumb a fast data path into your pod. Two years later, we're finally nearing productization. The work is not done, I will admit; there is more to do, but we can finally get a fast data path, and you'll see how fast it is if you stop by the Intel or Red Hat booth.
C: Okay, the last bit of the story is some other things we've been working on. So while one half of my brain is off doing that, the other side is working in the upstream Resource Management Working Group, which was a 2016 landmark, and out of that we've delivered CPU pinning and device plugins, as you know. We didn't deliver NUMA topology yet, because Derek has been busy, but maybe he'll get to that.
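For the CPU pinning piece, here is a hedged sketch of the pod shape that benefits from it: with the kubelet's CPU Manager static policy enabled, a Guaranteed-QoS pod (requests equal to limits, whole-number CPUs) is the one that gets exclusive cores. Names below are placeholders.

```python
# Hedged sketch: a Guaranteed-QoS pod that the CPU Manager's static policy
# would pin to exclusive cores. Image and pod names are illustrative.
import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pinned-workload"},
    "spec": {
        "containers": [
            {
                "name": "packet-processor",
                "image": "registry.example.com/dpdk-app:latest",
                "resources": {
                    # requests == limits with integer CPUs => Guaranteed QoS,
                    # which is what the static CPU Manager policy pins.
                    "requests": {"cpu": "2", "memory": "2Gi"},
                    "limits": {"cpu": "2", "memory": "2Gi"},
                },
            }
        ]
    },
}

print(json.dumps(pod, indent=2))
```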
C: The other thing that we're bringing into 4.0, which I have long known is a missing piece in Kubernetes, since 2015, is some kind of hardware discovery agent. If you've bought brand new Intel chips and your developers have optimized their code for AVX-512, and you're running on a machine that only has AVX2, you're leaving performance on the table. The way I like to talk about performance is: whose developers here like their apps to run slow? Who here likes to spend more than they need to?
C: That's where performance helps: you will reduce costs, improve density, and you will keep your developers and, more importantly, your customers happy. So to optimize where the applications run, we need some kind of hardware bootstrapping agent. We're not getting any patents on this stuff; it's pretty obvious, if you ask me. Intel put together a project called Node Feature Discovery, which both we and some other vendors are rallying behind, as a tool that discovers hardware and publishes it up through the scheduler as something you can route your workloads against.
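A small sketch of how that label-based routing looks from the workload side (the exact label key published by Node Feature Discovery depends on its version, so treat it as illustrative):

```python
# Hedged sketch: Node Feature Discovery publishes hardware capabilities as node
# labels, and a workload steers itself to capable nodes with a nodeSelector.
# The label key below is illustrative; check the NFD version in use.
import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "avx512-optimized"},
    "spec": {
        # Only schedule onto nodes where NFD reported AVX-512 support.
        "nodeSelector": {"feature.node.kubernetes.io/cpu-cpuid.AVX512F": "true"},
        "containers": [
            {"name": "app", "image": "registry.example.com/vectorized-app:latest"}
        ],
    },
}

print(json.dumps(pod, indent=2))
```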
C: It currently uses labels; whether or not it continues to use labels, I don't know, but that will also be in OpenShift 4, and, as you can guess, we have an operator for it. So that's what SIG Node and the Resource Management Working Group, which we work on heavily with Intel, have delivered, and then SIG Network has put together the Network Plumbing Working Group, which, by the way, you should participate in if any of this has rung a bell for you. We're more than happy to get real users in there.
C: I would love to have real users playing with these demos and giving us feedback. The CRD spec is kind of an implementation detail; I wouldn't get too much into it unless you're a really hardcore engineer, but that would be something useful to contribute to as well. And then, of course, we've got Multus: delivering Multus 3.x in OpenShift 4.0. All of that together should hopefully telegraph the fact that Intel and Red Hat have been working very closely and will continue to work closely, including a hack day.
A: And there's associated automation in the form of Ansible playbooks, available on the Intel GitHub page, along with a two-page executive solution brief to accompany it. I wanted to walk you through what the workflow and the automation look like, with the time that we have left. The prerequisite here is that the hardware is, of course, racked, stacked, cabled, and powered on.
A: We can't change physics from that standpoint, but once we have the hardware racked, stacked, and cabled, at that point you consume the reference architecture and the automation. You identify the management node; we typically denote that as either the Ansible installer or the bastion node. So this is the workload deployer node that gets identified.
A: You configure and provision it, put RHEL 7 on there, and, like I mentioned, those GitHub Ansible playbooks are there to help you with some of the initial steps of configuring this as the bastion node; you subscribe the systems and you download and clone the relevant playbooks. With the initial reference architecture we used Arista top-of-rack switches, so we're actually leveraging some Ansible code that's in Galaxy to provision and configure those top-of-rack switch modules.
A: After that we provision Red Hat Enterprise Linux on the rest of the nodes that comprise the cluster. Like I said, the top-of-rack switch modules are configured, and at this point we start two containers, just very lightweight ones: an iPXE deployer, which runs a DHCP and PXE server, and an NGINX web server that hosts the kickstart files provided as sample and auxiliary material on the GitHub.
A: It's all contained as part of that repo, and you make customizations there. The containers start, we communicate through IPMI to wake those systems up, they're discovered by the deployer, and RHEL is deployed onto them. Then, similar to how we use openshift-ansible today, we denote which nodes are masters, which ones are infrastructure, and which ones are application nodes by modifying our /etc/ansible/hosts file to accommodate the openshift-ansible playbooks; that's the same tooling and methodology we currently use to deploy OpenShift.
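For reference, a minimal sketch of the inventory shape that openshift-ansible expects in /etc/ansible/hosts (hostnames, the SSH user, and the node label values here are placeholders, not values from the reference architecture):

```python
# Hedged sketch of an /etc/ansible/hosts inventory for openshift-ansible:
# groups decide which machines become masters, infrastructure, and app nodes.
import textwrap

inventory = textwrap.dedent("""\
    [OSEv3:children]
    masters
    etcd
    nodes

    [OSEv3:vars]
    ansible_user=root
    openshift_deployment_type=openshift-enterprise

    [masters]
    master1.example.com

    [etcd]
    master1.example.com

    [nodes]
    master1.example.com
    infra1.example.com openshift_node_labels="{'region': 'infra'}"
    app1.example.com openshift_node_labels="{'region': 'primary'}"
    """)

# Review, then place on the bastion node before running the openshift-ansible playbooks.
print(inventory)
```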
A: Our partner Lenovo worked with us as one of the initial partners on version 3.5 of the architecture, and they've just refreshed that to version 3.9 of OpenShift. Cisco worked with us on a Cisco Validated Design on the UCS platforms around the March/April timeframe of this year, and our partners at Dell EMC worked with us on a reference architecture on Dell PowerEdge hardware. I want to point out that some of those folks are in the room here today with us. Remember, these are validated designs.
A: Well, I hope this gives you at least some examples or guidelines as to the work that we've been doing; hopefully that's helpful. One other call-out I wanted to make specifically is the OpenShift Commons Slack channel. I always like to say we grow when we share, and a lot of the folks who worked jointly with us on the solutions I mentioned are on those channels, so we're happy to engage in further conversation after this, off the stage or on the interwebs.
B: One more shout-out for the booth: the persistent memory use case we talked about here will be shown at the Intel booth, as well as a close approximation of the Istio/Envoy story. We weren't able to land all of those changes into the demo that we set up, but we did do it with HAProxy, so it's a very close approximation of what you'll see with Envoy. That's also running at our booth, and then, as the guys mentioned, we also have Multus at our booth and at the Red Hat booth.