From YouTube: From the Cloud to the Edge with David Bericat and Erik Jacobs, Red Hat OpenShift Commons Briefing

OpenShift Commons Briefing
From the Cloud to the Edge: Why Cloud Native Application Development Matters in Supporting IoT and Edge Computing
David Bericat and Erik Jacobs, Red Hat
July 18, 2019
A: Well, hello everybody, and welcome again to another OpenShift Commons briefing. This time we're going from the cloud to the edge. I heard this talk earlier and I thought it would be really good to reprise it as an OpenShift Commons briefing, so that we could get it out to a wider audience and maybe get a little bit of an update from David Bericat and Erik Jacobs. The format will be: we'll do the presentation, and afterwards we'll have live Q&A.
C: All right, so today's talk, we're going to be talking about IoT and the whole edge picture. Just to level set really quickly: when people talk about IoT, especially if you're in the IT industry, you hear IoT and you think about, you know, industrial settings, the factories, or whatever. The reality is that the internet really is all things these days. We have all of these devices in our homes. Our cars themselves are now internet connected in many cases, and they're all busy sending data back to some mothership somewhere, because somebody thoughtfully and carefully designed the system to do that with some end goal in mind. So for you in the IT industry: what device may come into your perspective that's going to need to be a part of this Internet of Things?
B
Thanks
Ari,
so
did
you
see
Tommy
talk
that
we
need
last
year
actually
in
May
last
May
in
Boston?
It
feels
like
that.
Yes,
we
are
gonna
quickly,
that
space
on
has
been
doing
IOT
for
the
past
I,
guess
through
you,
how
we've
been
living
in
more
and
more
what
we
call
him
to
another
Haiti
alladhi
analytics
or
at
the
piece
of
intelligence,
machine
learning
and
different
another
war
load
on
top
of
the
platforms
and
we'll
talk
about
what
cloud
computing
means
to
us.
B
So,
quick,
that's
based
on
the
strategy
you
know
by
now
we
are
a
platform
company
and
we
are
in
the
open
having
cloud
business.
So
we
want
to
provide
that
infrastructure
for
you
to
put
all
your
services
and
wor
load
on
top
of
it.
We
actually
have
do
not
have
an
internal
things
platform
and
we
are
not
planning
on
having
one.
B: What we do, though, is we are very active in different open source communities, like the Eclipse IoT working group and the Linux Foundation, with a number of different projects. Our engineers are working hard with some other partners on developing the next generation of those projects in different architectures, and then we go and build those platforms with our partners.
C: So what we want to start with is kind of a high-level functional overview of what we think a good IoT solution is, and then we'll sort of start to dive down into the architecture, and then into point solutions for each of the different functional areas. Reading this diagram from left to right, we start with the device, but it's more than just the device itself. In many cases the device actually has to be provisioned. The device needs to be told how to connect to the network, or, you know, authenticate to the systems that it needs, to activate, and so on and so forth. All of that management and connectivity is a big piece of the puzzle.

And then, as we move to the right, we're getting closer and closer to on-premises, to our own data center, if you will, and the first thing we hit is what Red Hat likes to think about as the intelligent edge, where we can start to do some processing. As we begin to move closer, we get to the intelligent edge with processing and analytics. You may not always want to wait for data to hit the data center before you act. We start to apply advanced analytics and begin to train our machine learning models and feed data into them, as we want to perform more complex interactions with the information that's coming back to us from the devices. And, lastly, we have the core of our infrastructure, our business and application integration. This is where things like "oh yes, we responded instantaneously at the edge" tie back into the rest of the business.

Keeping data, you know, secure as it transits through this system is crucial as well. So that's our kind of big-picture view of the functional system, and as we start to look kind of closer at the actual architecture, what we see is that, you know, we may have many, many IoT edges, depending on the scale of our solution, depending on geography, these other things. We may have more than one edge that's connecting to a specific subset of devices.
C: Conceptually, we have an integration hub, which is sort of fewer in number than the edges, but the integration hub is where we begin to assemble all of the information that comes to us from these different edge points, and start to feed it into our data management and analytics platforms, and into our internal applications.
B: Thanks, Erik. So there are so many different problems here that we tend to group them from two different angles. We have what we call the data plane, which is: how are you going to acquire the data? How are you going to move all that data from those different devices and different entities to whatever location you need, whether centralized or distributed? How are you going to filter that data, only sending the right data, at the right moment, to the right location, over the right channel?
B: Not all the data is equally important, and you probably want to pre-process all that data at the edge: you take the action as you go, but you only send the really meaningful information to that central location. And when you think about it, you're deploying a whole new infrastructure out there at the edge, so there's also the control plane problem, which is: how do you manage all of that? How do you make sure that those devices are not compromised? How do you push updates and patches, and how do you make sure that you change the configuration dynamically depending on different conditions?
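The data-plane idea David describes, act locally on everything but forward only what matters, can be sketched very roughly like this (a minimal illustration, not Red Hat tooling; the field names and threshold are made up for the example):

```python
def preprocess_at_edge(readings, threshold=75.0):
    """Pre-process telemetry at the edge: keep a local summary of every
    reading, but forward only the meaningful ones to the central location.

    `readings` is a list of dicts like {"sensor": "t1", "value": 80.2}.
    """
    # Readings at or above the threshold are worth sending upstream.
    forward = [r for r in readings if r["value"] >= threshold]
    # The full picture stays at the edge as a cheap aggregate.
    summary = {
        "count": len(readings),
        "max": max((r["value"] for r in readings), default=None),
    }
    return forward, summary

readings = [
    {"sensor": "t1", "value": 42.0},
    {"sensor": "t2", "value": 80.2},  # anomalous, gets forwarded
    {"sensor": "t3", "value": 76.5},  # anomalous, gets forwarded
]
forward, summary = preprocess_at_edge(readings)
# Only 2 of the 3 readings cross the threshold and leave the edge.
```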
B
How
do
you
push
a
new
machine
learning
models
or
new
business
intelligence
once
you've
been
able
to
improve
those
business
processes?
So
that's
what
we
do
with
the
control
plane
over
RT
know.
This
is
the
whole
concept
of
from
data
to
information
to
business
decision
track.
It's
those
algorithms
and
those
applications
and
micro
services,
and
push
them
to
leave
as
close
as
possible
to
the
data.
So
you're
not
you've
been
respectful
with
latency
and
low
latency
and
real-time
applications.
How
do
you
make
those
as
heat
as
fast
as
possible?
B: You probably won't do all of that at the same time, so some people will start from the things in the lab, and some people will come more from the data side, and that is fine, but at the end of the day this is how you'll be able to make the most of the end-to-end architecture and take it to the next level.
B: We at Red Hat believe that we have a lot of different parts of our portfolio that are meant to provide those functional building blocks. Obviously, we're in an OpenShift Commons briefing, so you'll see OpenShift all over the place. We believe it is a perfect platform to provide that hybrid orchestration and to deploy all those services and push them to the edge.
B
So,
most
of
the
time
we
consider
internal
things
to
be
an
integration
problem.
It's
no
surprise
to
anybody
with
an
I
think
data
from
different
sources
and
and
patient
systems
and
we're
gonna
get
all
that
real-time
data,
it's
gonna
flow
speaking
different
protocols,
like
you,
saw
suspects,
MQTT
a
stick
in
a
couple.
Others
you're
gonna
be
able
to
get
all
that
telemetry
data
and
flow
that
the
rest
of
your
applications,
and
you
also
need
to
need
to
be
able
to
send
those
command
and
control
message
if
back
and
forth
across
the
architecture.
B: So we believe at Red Hat that AMQP 1.0 is a good way of standardizing all the different protocols coming from the edges once you're in the hybrid cloud environment and you move more toward an event-driven architecture type of scenario. At Red Hat we put a lot of effort into our next generation of messaging: it runs natively on OpenShift and on Kubernetes, and you can scale it out and scale it down based on the messages and the number of connections that you have. You can speak different protocols, such as MQTT. You can ingest all that data with something like Kafka, with AMQ Streams. If you want to go more with the brokerless architecture, doing dynamic routing, or if you need a broker because you want to play more of a producer-and-consumer type of messaging scenario, you can do that as well. And, you know, once you have all that data and you need to integrate it, we use Fuse.
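The broker-based producer/consumer pattern David contrasts with brokerless dynamic routing can be illustrated with a toy in-memory queue (purely a sketch of the pattern; a real broker like AMQ obviously does vastly more, including persistence, credit-based flow control, and multiple protocols):

```python
from collections import defaultdict, deque

class ToyBroker:
    """A toy store-and-forward broker: producers enqueue messages on a
    named address, consumers dequeue them later, so the two sides are
    decoupled in time."""

    def __init__(self):
        self.queues = defaultdict(deque)

    def publish(self, address, message):
        self.queues[address].append(message)

    def consume(self, address):
        # FIFO delivery; None signals an empty queue.
        q = self.queues[address]
        return q.popleft() if q else None

broker = ToyBroker()
broker.publish("telemetry", {"device": "gw-1", "temp": 21.5})
broker.publish("telemetry", {"device": "gw-2", "temp": 22.0})
first = broker.consume("telemetry")  # gw-1's message arrives first
```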
C
On
my
laptop,
your
developers
are
working
internally
premises
at
the
core
of
the
data
center
with
openshift
building
their
applications
testing
them
there,
and
then
you
just
take
those
containers
and
push
them
out
towards
the
data
management
platforms
or
the
IOT
integration
hub,
depending
on
the
hardware
resources
that
are
available
at
the
iot
edge.
This
can
potentially
extend
there
as
well
or
depending
on
how
you
build
these
containerized
shion's.
C: There are a lot of other services around continuous delivery, both the automated builds as well as deployments, native pipelines and integrations, as well as lots of configuration management features, bundled with application logging and application monitoring. And so again, we feel like OpenShift provides kind of a one-stop shop to fit across the entirety of your environment.
C: Now, with the work that we started to do previously with Che, and what we're rebranding as CodeReady Workspaces, we additionally give you the ability to protect the source code that's being written by these developers, because their IDE essentially is in the browser. This obviates the need, and a lot of the effort, that's been placed into VDI over the last many years, because now we don't have to worry about the whole desktop environment.
C
We
really
just
care
about
the
development
environment
and
that's
provided
VHA
in
this
product
that
we're
solution
that
we're
calling
hood
ready
workspaces,
which
is
bundled
natively
with
OpenShift,
as
well
as
being
tightly
integrated
with
openshift.
So
it
makes
it
even
easier
for
your
active
teams
to
get
up
and
get
going
with
openshift
as
a
solution.
B: So we get all the questions about what the edge means. If you asked as many people, they'd probably have 25 different ways of calling it. It really depends: everything that is not on the hybrid cloud or in a data center can be considered an edge. If you're more in the telco space, you're thinking more as a service provider and talking about the mobile edge computing type of use case.
B
It's
all
things
connected
to
different
its
factors
can
be
really
complex
already
right.
So
we
try
to
give
framework
saying:
please
apply
common
sense,
centralized
where
you
can
and
distribute
where
you
must
think
centralized
always
he's
here,
but
sometimes
depending
on
and
fun
with
latency
and
resiliency,
and
where
the
data
is
and
how
much
data
you
can
move
out
of
where
you're
living
vessel
in
your
in
in
industrial
scenarios,
sometimes
that
data
kind
of
leave
the
factory
and
things
of
that.
B
When
you
take
that
to
the
next
level,
if
we
definitely
can
have
a
different
conversation
about
what
the
different,
however,
form
factors
look
like
how
much
CPU
memory
stored
at
Exeter
Exeter
you
have
with
that,
is
so
more
define
or
what
there
are
spiritual
eyes
or
is
very
middle
you.
You
know
that
the
beauty
of
the
platforms
are
gonna,
be
underpinning
the
easy
you
will
want
you
deploy
anywhere.
B
We
believe
that
the
open
sea,
possibly
there,
they
are
also
in
a
minute
depending
on
the
constraints
of
the
environment,
infrastructure
or
hardware
that
you
have
underneath
we
talk
a
lot
open
sieve
right.
So
this
background
native
talk
as
well.
We
wanted
to
say:
okay,
yeah
talk
about
an
example.
Let's
talk
about
several
lists,
let's
talk
about
how
things
like
an
ad
or.
C: That's going to be repurposed and tightly integrated with Tekton, and then, once you have that application container, really that function container, if you will, the next two pieces are the serving piece and the eventing piece. Unsurprisingly, serving is the actual thing that serves the function. This is the part of the infrastructure that understands "there are requests coming in for the function, I need more capability to handle them," or "there are no requests coming in for the function, I am going to scale to zero."
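The serving behaviour described here, scale up with incoming requests and scale to zero when idle, boils down to a decision like the following (a deliberately naive sketch of the concept; Knative's real autoscaler works on windowed concurrency metrics, and the capacity parameter here is invented for illustration):

```python
def desired_replicas(inflight_requests, per_replica_capacity=10):
    """Naive scale-to-zero decision: no replicas when there is no
    traffic, otherwise just enough replicas to cover the load."""
    if inflight_requests == 0:
        return 0  # no requests for the function: scale to zero
    # Ceiling division without importing math: each replica handles
    # up to per_replica_capacity concurrent requests.
    return -(-inflight_requests // per_replica_capacity)

idle = desired_replicas(0)    # no traffic, everything scales away
busy = desired_replicas(25)   # 25 in-flight at capacity 10 -> 3 replicas
```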
C: On the eventing side, Knative eventing is essentially a common infrastructure for handling events. And what are events? Well, they could be just about anything. A commit of a row to a database could be interpreted as an event; data coming in from an external source and hitting our message queue could be a type of event. And Knative eventing provides a way to not only have these individual events, but to also chain them together and build more complicated workflows for how to respond to things that are happening in our environment.
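The "chain events together into workflows" idea maps naturally onto composing handlers, which a toy version can show (nothing Knative-specific here, just the chaining concept; the handlers and event fields are made up):

```python
def chain(*handlers):
    """Compose event handlers into a pipeline: each handler takes an
    event dict and returns a (possibly enriched) event for the next
    stage, or None to drop the event entirely."""
    def pipeline(event):
        for handler in handlers:
            event = handler(event)
            if event is None:
                return None  # a handler filtered the event out
        return event
    return pipeline

# Two tiny stages: parse the raw payload, then flag high values.
parse = lambda e: {**e, "value": float(e["raw"])}
flag = lambda e: {**e, "alert": e["value"] > 100}

workflow = chain(parse, flag)
result = workflow({"source": "db-commit", "raw": "120.5"})
# The chained workflow enriches the event and marks it as an alert.
```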
B
So
we
have
the
we
have
a
civilized
building
block
right
now
we
can
ad
in
OPC
and
then
we
we
sort
of
we
started
to
think.
Okay.
How
do
we,
how
we
really
build
a
I
mean
we
did
a
poll
here
on
the
on
the
audience?
Probably
nobody
would
guess
that
you
can
do
any,
and
people
usually
got
the
Python
or
the.
B: Java stopped being the obvious choice there because it wasn't designed for those environments, right. So how can we do something with it? And they came up with this new open source project called Quarkus, which, the way they call it, is "supersonic subatomic Java." You've probably heard several talks about the topic already; I'm very passionate about it. I think it's a perfect fit in this type of environment. It's very lightweight, it's Kubernetes native, and it runs on OpenJDK, or natively, I mean, it runs like if it were C or C++.
B
So
your
memory
consumption,
how
fast
it
starts
and
stops
how
much
memory
you're
getting
it's
exactly
what
you
need
to
when
you
have
those
small
functions
and
services
in
really
constrained
environments
like
in
its
competing
in
all,
you
need
to
know
things
or
it
can
be
a
gateway
state
law
connected
to
a
machine
in
the
in
the
most
extreme
environment
right.
So
how
is
that
possible?
So,
basically
we're
using
the
growl
compiler
a
little
machine.
You
kick
off
one
of
these
pipelines,
so
you
get
the
Java
jar.
B
We
produce
one,
a
bunch
of
the
different
frameworks
that
you
see
a
couple
of
logos
that
are
there
we
symbol
all
that,
then
we
decide
whether
we
want
that
to
be
running
on
open,
JDK
we'd,
say,
obviously
it's
more
portable
in
a
standard,
but
you
pay
are
going
to
be
a
premium
or
you
want
to
run
that
natively
in
the
Box
I
out
the
other
day.
It's
gonna
be
right,
so
we
keep
building
from
sensors
double
field.
B
We
have
Linux
containers,
we
have
kubernetes,
we
have
K
native,
we
have
quark
OU's
and
then
is
like
okay.
This
is
great,
but
we
said
the
internet
thinks
it's
all
about
integrating
different
data,
so
patients
are
whether
that
well,
needless
to
say,
we
we
love,
camel.
We
love
being
able
to
get
all
the
different
components
or
invent
sources
and
use
the
enterprise
integration
patterns
to
do
that.
Filtering,
pre-processing,
standing
the
data
after
different
channels
and
through
different
subscribers.
B
You
will
so
how
can
we
create
something
that
is
super
like
way
that
is
based
on
camel
and
we
mix
the
best
of
both
worlds?
Well,
we
came
back.
We
come
okay,
which
is
chronic,
camel
the
song
in
a
comedy
way.
It
is
based
on
the
operators
SDK.
It
runs
on
open
seas
and
kubernetes.
So
you
can
define
those
what
we
call
integration
functions,
which
is
a
lot
more.
A
lot
closer
to
publishers
are
services.
B
You
will
go
from
the
integration
standpoint,
so
if
you
follow
close,
if
seen
in
all
those
guys,
they
have
a
lot
of
different
demos
where
you
can
see
the
start
up
memory,
how
those
integration
functions
scalene
and
out
the
super
easy
to
describe
from
a
typical
one,
the
deployment
and
every
deployment
time.
It's
it's
crazy.
It's
literally
almost
nothing
and
by
running
it,
natively,
improving
at
ease
and
and
no
conceive
you
effectively
can't
handle
all
the
different
bikes
in
war
loads.
B
Coming
from
the
different
data,
once
you
have
ingested
that,
how
do
you
make
sure
that
all
your
different
micro
services
and
applications
it
has
to
access
all
that
data?
When
you
have
a
massive
tail
and
you're
a
global
company,
we
believe
excellent
solution
that
definitely
can
run
their
idea
test
well,
and
you
break
the
big
problem
into
smaller
chunks,
which
is
always
a
good
idea.
B
We
sort
of
created
a
dependency
of
ordinary
Steve
as
that
application
orchestra,
but
we
don't
have
to
write,
is
great
and
but
there's
going
to
be
some
environments
in
the
northeastern
areas
computing.
What
we
call
the
device
edge
where
kubernetes
is
probably
today
not
ready
to
run
it's
a
lot
of
research
going
on
by
others
and
some
others
to
do.
A
more
lightweight
version
of
kubernetes
or
corner
needs
to
be
working,
not
thinking
that
it
is
on
a
data
center,
but
one
or
any
cluster
that
is
self-sufficient
and
disconnected.
B
There's
also
research
going
on
in
that
space.
But
at
a
minimum,
what
we
can
do
if
we
get
the
quad
coos
as
you're,
a
building
block
sure
you
know
how
can
any?
If
you
don't
have
the
surveillance
point
running
on
top
of
kubernetes,
but
you
still
have
Linux
containers
at
the
edge.
You
can
run
an
operating
system.
That
has
that,
obviously,
if
his
right
hand,
the
price
Linux
we're
going
to
be
happy,
but
no
need
to
a
rail
is
not
also
able
to
run
in
every
IOT
scenario.
C
Thanks
David,
so
that
brings
us
to
the
end
of
our
slides
demo
or
anything,
but
we
do
have
a
bunch
of
resources
that
you
can
visit
to
actually
start
to
learn
about
some
of
these
technologies
and
get
hands-on
experience
with
them.
Those
URLs
are
here
in
this
slide
David
we
did
have
a
question
about
existing
IOT
and
OpenShift
implementations.
I
remember
last
year,
not
this
year,
but
last
year
at
Summit
we
had
some
partner
stuff
that
we
were
showing
off.
Can
you
maybe
talk
about
some
examples
of
yeah.
B
Yeah
sure,
oh
yeah,
we
we
just
record
without
with
different
partners.
We've
been
working
with
different
customers
in
different
industries
early
for
the
past
two
three
years.
We've
done
quite
a
bit
of
work
with
telcos.
Obviously
my
first
work
with
was
with
Telefonica
back
in
2016
and
there's
our
common
public
talk
in
Dratch
I
mean
so
we
don't
have
experiencing
marci
these
type
of
environments.
We
are
partying
or
smart
transy
and
things
of
that
nature.
B
Recently
we've
been
pushed
more
towards
the
energy
space
and,
in
particular
in
the
industrial
for
the
lower
space.
Last
year,
I
had
summit,
we
had
a
talk
that
is
public,
you
can
go
to
youtube
and
tap
or
we
can
send
out
over.
The
distribution
is
Diane
just
Ramirez
with
boss
connected
industry.
They
have
270
different
factories
all
over
the
place
and
and
obviously
they're
the
so
they're,
also
making
things
stable
and
reliable
and
secure
so
they're.
B
The
factories
are
unlike
bunkers,
nobody
can
get
in
and
out
so
with
a
net
computing
type
of
use
case
deploying
different
IT
services.
In
one
of
those
super
secure
networks,
we
did
a
pure
open
source.
It
was
a
POC
in
four
days,
which
is
pretty
impressive
when
they're
connected
a
small
machine
to
one
of
the
gateways
and
we're
able
to
get
all
that
data
and
flow
that
they
are
building
that
the
pipeline.
We
do
that
with
our
partners.
B
You
wrote
a
Content
area
with
a
great
implementation
of
the
IDE
and
reference
architecture
that
that
we
just
showed
so
more
details.
Please
go
there.
We
also
have
a
couple
more
examples:
healthcare
and
energy,
where
they
basically
want
to
connect
the
different
videos
and
all
weeks
and
things
like
that.
C: They collect IoT data from cattle. They have a device that goes in a cow, and I think each cow generates something like 10 megs of data a day, and they pull in like 5 petabytes of data a year. Just totally insane amounts of information, but that was a pretty cool use case. There's a question about managing multiple clusters running at the edge. David made a "holy grail" comment earlier; I would say that multi-cluster management is the current holy grail.
B: A couple of additions to that. On blog.openshift.com, I think we released the first version of the multi-cluster federation, now being called KubeFed. Check it out; I was just reading about it this morning. The second thing in that space that is really interesting: I do believe that our Italian colleagues did a use case in Italy where they had kind of distributed worker nodes. Obviously, that was more of a proof-of-concept environment, but it was a real use case with a customer, where you had the masters.
B
Morally
centralized.
You
had
different
clusters
distributed
in
factories
and
they
had
sort
of
a
synchronous
way
of
taking
the
images
once
peeled
from
the
master
in
synchronize
that
with
private
registries
at
the
edge
and
from
that
they
were
using
until
all
this
was
open,
see,
obviously,
three,
obviously
that
were
using
unseeable
to
kind
of
provision
that
to
gateways
that
were
running
rail
that
were
not
using
we're
not
using
open
sea
for
anything
coronaries.
So
I
remember
that
dog
I
know
it's
in
opposite
comment.
That's
another
way!
You
can
look
at
a
B.
A: Wow. We're having another gathering in Milan on October, no, September 18th. So if you can take a look and see if you can figure out which one that was... I'm trying to think, because I'll look at the schedule too, but we'll dig it up and put it in the blog post that will go with this video, so people can reference that. And possibly, if they did a good job, I'll make them do it again and give us an update at the Milan event on the 18th of September.
A
All
right
are
there
any
other
questions
folks.
This
was
really
good.
I
apologized,
a
little
bit
of
feedback,
blue
jeans
kind
of
died
on
me
a
few
times,
so
I
came
back
into
the
back
door,
but
it
should
be
annoyed
now,
edit.
Those
bits
out
as
well
as
elaborating
acknowledge,
are
the
blue
jeans
and
post
this
video,
probably
tomorrow,
because
I'm
at
a
conference
right
now
and
don't
have
upload
and
download
speeds
that
are
conducive
to
putting
things
on
YouTube
and
posting
blog
posts.
A
But
this
is
really
great
content
and
if
you
have
follow-ups
to
this
or
demos
that
you
want
to
share
with
the
community,
please
let
us
know:
we've
got
some
great
Quercus
content
already
and
some
k
native
stuff
already
on
the
common
stuff,
but
updates
are
always
welcome
and
as
well
as
work.
If
you
get
someone
who's
willing
to
share
their
use
cases
and
their
implementation,
that
would
be
wonderful
as
well.
A: Awesome work, guys, and thanks for doing this; we really appreciate you taking the time out today. I don't see any further questions. Michael Hanson has been actively talking on chat, so I'm just giving him an opportunity to weigh in one more time if he has any more questions.
A: Thank you. All right, guys, I'm going to let you go and give you ten minutes back of your day. And trust me, Michael, when you get back from China, I want to hear what you learned about what they're doing there, and see if we can further the conversation, because the China Mobile people did a great demo in Barcelona. I was pretty impressed, but it went really deep, so maybe we can showcase some of it.