From YouTube: Real Time Pipeline Monitoring for the Energy Sector
Description
Don't miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands from 18 - 21 April, 2023. Learn more at https://kubecon.io The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
A
Hello, and thank you for joining us today for the webinar on real-time pipeline monitoring for the energy sector. I'm Grant Swanson, your host for the session, and I'd like to introduce James Stewart Bridges, head of product for Clarion; Ben Cleary, CTO for Clarion; and Alex Michalev, Senior Solutions Architect for InfinyOn. Welcome.
B
Hello, good morning, good afternoon. This is James Stewart Bridges speaking. As Grant said, I lead product at Clarion. Clarion is a young British company based in the UK, and we do both data analytics and the development of the electronics and hardware that go with it. So what are we doing?

Our main focus is pipeline infrastructure: energy pipelines, oil and gas pipelines, and also water pipelines. Our mission is to optimize that infrastructure, and there are all sorts of different ways in which this is relevant. For example, think about the pumps or compressors driving fluids through a pipeline. The amount of power those pumps or compressors use really needs to be optimized, because even a small improvement in optimization can make a huge difference.

We can do predictive maintenance: look at how the equipment is running and work out the best time to do maintenance on it, with analytics that see problems are likely to happen before they actually happen. That really optimizes the time between maintenance, so that you don't do it too early and you don't do it too late.

We can also support geohazards. Around a pipeline you have a lot of hazards from hillsides, landslides, seismic activity or floods, and that is another area where our software, data analytics and hardware can really support. So what do we have? We have developed some really great software, which is designed from the bottom up to be secure to the latest standards, and Ben will talk about that in a little more detail later on.

The great thing about the system is that it's very modular and scalable, and what sets us apart from other companies that do this type of thing is that we don't just do software and data analytics. We also provide the hardware: we provide our own edge device, which we design ourselves and which you can see on the right-hand side.

This is a device which can go out into the field, self-powered, work in a remote location and do data analytics there, so that you don't have to send back very large volumes of data over a satellite link or a low-bandwidth connection.

We integrate data from third-party sources as well, so it's not just data that we capture from our own sensors and our own hardware. It can be data that is already available from sensors that currently exist, perhaps sensors that are already connected to a SCADA system; all of that kind of data can be ingested. It might be data which is manually collected and ingested, or it might be a third-party data source, such as what the commodity price is today, or what the weather forecast is and how the temperature is likely to go up and down. All of these different siloed data sets, when you combine them and pull them together, are where we can really add value and provide some meaningful insights: not just about what's happening in real time, here and now, but also about the past, the history.

So that's enough of the high-level overview. If we start to think about how this all works, what we have on the right-hand side is a workflow where you start by capturing and recording data. As soon as you've got data into a system like this, one of the biggest challenges is cleaning up that data.

But when you understand it, look at it in more detail and in aggregate, there can be really meaningful insights there, so cleaning data is an important and big part of that. Then we go through the workflow of doing some advanced analysis, which we'll talk about a bit more later on. One of the things that's really key to recognize is that we're not just doing simple analysis in real time.

We're also able to do machine learning, artificial intelligence and sophisticated data processing in real time, on the fly, as the data comes in, which is different from most systems and a really powerful way of being able to process, manipulate, understand and get value from data. And so the workflow goes on: we reveal insights, and we provide the visualizations, the communication and the meaning in that data, with the objective of understanding how better to manage things in future.

Take actions, learn from it, optimize things and then close the loop, going back to being able to keep looking and continuously improve. And with that, let me hand over to Ben to go into some of the more technical detail.
C
Thanks, James. Hi everybody, I'm Ben, the head of technology. My role is building out the software side and data ingestion, and I'll be continuing on from here. As James mentioned, we ingest data from multiple places. We have our proprietary edge controller; we have external APIs, which, as James mentioned, could be commodity pricing or anything else; and we also have third-party data integrations.

These could be existing SCADA, field ops, control room or incident management tools. Now, our existing architecture looked a lot like this. We were using RabbitMQ, which fed a number of services, and each one of the services on the left would have to be written and then maintained. As I'm sure you're all aware, this becomes quite hard to manage, because at some point you don't just want to ingest the data.

The service deployments were not a simple task. There were lots of them, we had to have lots of supporting tools, and all of these then required the logging, monitoring and maintenance that go with them. And again, third-party data integrations often have undocumented APIs and data formats that weren't the greatest and required a lot of cleaning.

We also found that this approach was not as flexible as we first thought. Our data team spent probably about 90 percent of their time having to clean the data before they could actually get to work on it. This is where we were introduced to InfinyOn last year, and we began to look at their open source product. Alex, I don't know if you want to say a few words on this particular bit.
D
Thank you, Ben. InfinyOn is the company behind the Fluvio open source data streaming platform, and we also provide managed services as part of InfinyOn Cloud. Our data streaming platform is built from the ground up using Rust, which provides low latency, high performance and programmability.

We also have SmartModules, which allow you to deploy data transformations and data cleaning into streams, removing the need to move data in and out of a stream or message queue as in the previous example given by Ben. We also provide an immutable store: it's not a queue, it's a stream, so you can read data from it within the retention period. We provide native client APIs for Java, JavaScript, Rust and Node.js clients, and as a company InfinyOn supports its developers.
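To make the client picture concrete, here is a minimal sketch of producing to and consuming from a Fluvio topic with the Python client (the Python SDK comes up later in the Q&A). The topic name and payload are invented, and the method names reflect an assumption about the client API rather than anything shown in the webinar.

```python
# Minimal sketch, not from the webinar: produce one record to a Fluvio topic
# and read it back. Topic name and payload are made up for illustration, and
# the method names are assumptions about the fluvio Python client.
from fluvio import Fluvio, Offset

fluvio = Fluvio.connect()                        # connect to the current Fluvio profile (local or InfinyOn Cloud)

producer = fluvio.topic_producer("pipeline-sensors")
producer.send_string('{"sensor": "pump-1", "flow_rate": 42.0}')

consumer = fluvio.partition_consumer("pipeline-sensors", 0)
for record in consumer.stream(Offset.beginning()):   # the stream is an immutable log, replayable within retention
    print(record.value_string())
    break
```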
C
Thank you, Alex. So once we had been introduced, we began work on adopting this, and the architecture that we now have flows in this way. We have a series of connectors, which could be MQTT or HTTP, and these are also managed inside the InfinyOn Cloud service that we use. All of these allow us to write and configure data to be transformed and streamed in, and we have an example pipeline here where we serialize to JSON.

We may run some change-point detection, and then we also want to store that data. You may be asking what the difference is between the previous architecture and this one: it's that this is all managed inside InfinyOn Cloud, so our team just gets on with writing the data analysis.
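The change-point detection itself isn't shown in the talk. Purely as a rough illustration of the kind of check such a step might perform (not Clarion's actual algorithm, and not a Fluvio API), a simple rolling-mean shift detector might look like this:

```python
# Illustrative only: a very simple change-point flag based on a rolling mean,
# not the actual detection used in the pipeline described above.
from collections import deque

def detect_shift(values, window=20, threshold=3.0):
    """Yield indices where a sample deviates strongly from the recent rolling mean."""
    recent = deque(maxlen=window)
    for i, v in enumerate(values):
        if len(recent) == window:
            mean = sum(recent) / window
            var = sum((x - mean) ** 2 for x in recent) / window
            std = var ** 0.5 or 1e-9          # guard against a zero std on flat data
            if abs(v - mean) > threshold * std:
                yield i
        recent.append(v)

# Example: a flow-rate series with a step change part-way through
readings = [10.0] * 30 + [18.0] * 10
print(list(detect_shift(readings)))           # flags the first samples after the jump
```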
C
So the real keys for us were that it's a simplified architecture; that there was an efficiency improvement, because the data team can add connectors and SmartModules and build out the pipelines that they want without really waiting on the dev teams; and that it's extremely extensible, and we'll jump into that in just a minute.

The inline execution of code, these kinds of modules, is something that really drew us in. We use it for our data ingestion side, from an edge device all the way to an existing SCADA system; we use it for real-time data visualization; and we also use it for our ETL workflow. Our data team has moved away from the traditional ETL approach, and we're able to just tap into streams of data to begin the analysis.

We're also beginning to explore the ability to apply AI and ML in stream: the ability, as James mentioned, to perform that in real time. It's been a bit of a game changer for us. Now I'm just going to jump to another screen and we'll run through a demo of where it sits and how it works.

When you open our platform, there is an overview of the location. In this instance we have a map view of the UK, and we have a couple of edge nodes that we've configured for this demo.

We have the ability to view tickets and incidents from the home screen. What we're able to do, and this is where part of the InfinyOn Cloud comes in, is actually stream data straight to the map, so our ops teams are able to get a snapshot of how things are performing in the field.

Now, this is just one aspect of where it sits. The other aspect we've focused on is our dashboarding side, and this is an area that we're quite proud of: we're able to feed the data from InfinyOn Cloud over WebSockets into our dashboards and get a real-time feed, as well as a mix of historical and contextual data, which provides quite a powerful view.

Now, this is all good and fine, but I'm well aware that this audience probably works more with the data itself, so here's another example which allows us to walk through how we can work with streams and data frames.

The client should already be installed, which is great, and we're going to look at three ways of working with this. We can work in a traditional client-based way, where we hook into the stream of data externally, work with the data, and filter and parse it; we're going to look at how we can use WebAssembly as part of this; and we're also going to have a quick look at how we can apply a WebAssembly module, or SmartModule, to the actual stream itself.

Alex mentioned InfinyOn have a cloud managed service, and that is the service that we use, but Fluvio is an open source product, so if you need to run it locally or internally, you can do that as well. Let's just sign into this, and we should get a message back to say it's all connected. Fantastic.

Now, what we have is a couple of functions which we've written that allow us to grab some data from a topic inside Fluvio, and we're able to set the number of messages that we want to see. In this instance we're just going to grab data from the MQTT topic on partition zero, get the first ten, and then just print the first one out. So that should do that.
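As a sketch of what that helper might look like with the fluvio Python client (the topic name and the exact consumer method names are assumptions, not the demo's actual code):

```python
# Sketch of the notebook helper described above: grab the first N records
# from a topic/partition and print the first one. Assumes the `fluvio`
# Python client; topic name and method names are illustrative.
from fluvio import Fluvio, Offset

def fetch_records(topic, partition=0, limit=10):
    fluvio = Fluvio.connect()
    consumer = fluvio.partition_consumer(topic, partition)
    records = []
    for record in consumer.stream(Offset.beginning()):
        records.append(record.value_string())
        if len(records) >= limit:
            break
    return records

messages = fetch_records("mqtt-topic", partition=0, limit=10)
print(messages[0])   # raw MQTT envelope: the topic plus a payload given as bytes
```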
C
We have the MQTT topic and a payload, which is just a list of bytes, so we need to manually clean and parse this. What we have is a function that we wrote that will clean the data and then spit it back out as a JSON string. If we run this here, you can see that the message has worked.
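A minimal sketch of that kind of cleaning function, assuming the raw record is a JSON envelope with an "mqtt_topic" field and a "payload" given as a list of byte values (those field names are assumptions, not the demo's actual schema):

```python
# Illustrative cleaning step in the spirit of the function described above:
# decode the byte payload from the MQTT envelope and return a JSON string.
# The envelope shape ("mqtt_topic" / "payload" keys) is an assumption.
import json

def clean_record(raw: str) -> str:
    envelope = json.loads(raw)
    payload_bytes = bytes(envelope["payload"])           # list of byte values -> bytes
    text = payload_bytes.decode("utf-8", errors="ignore").strip()
    reading = json.loads(text)                           # the payload itself carries JSON
    reading["source_topic"] = envelope.get("mqtt_topic")
    return json.dumps(reading)

sample = '{"mqtt_topic": "sensors/pump-1", "payload": [123, 34, 102, 108, 111, 119, 34, 58, 52, 50, 125]}'
print(clean_record(sample))   # -> {"flow": 42, "source_topic": "sensors/pump-1"}
```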
C
But that is a very traditional way, and that was the way of working we were trying to avoid. One of the reasons why InfinyOn and Fluvio were so attractive to us was the ability to leverage Rust to build WebAssembly modules. I'll show you what that looks like; here's a snippet of some Rust code.

The InfinyOn team have been kind enough to build out a toolchain that lets you build modules based around filter, map, filter-map, array-map and everything else, so it allows you to get up and running quickly without having to do everything yourself.

What we have here is a basic map, so every record that goes through this will be mapped, and all we want to do here is parse. We have taken that Python code and converted it to Rust, and that is what will be sent out; where there's an error, we also get that into the queue as well.

If we jump back to this, you can see that we've just imported two extra imports: the consumer config and the SmartModule kind, which is an enum. We've built a really basic configuration here, and then we've modified the previous function to allow us to pass this in.
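In outline, the modified helper looks something like the sketch below. The ConsumerConfig and SmartModuleKind usage is my reading of the client API as described in the talk, and the SmartModule name is invented; treat the signatures as assumptions rather than verified calls.

```python
# Sketch of the modified helper described above: the same consumer, but with a
# SmartModule applied in-stream so records arrive already cleaned.
# ConsumerConfig / SmartModuleKind usage below is an assumption about the
# fluvio Python client's API, not a verified signature.
from fluvio import Fluvio, Offset, ConsumerConfig, SmartModuleKind

def fetch_clean_records(topic, partition=0, limit=10, smartmodule="clean-map"):
    fluvio = Fluvio.connect()
    consumer = fluvio.partition_consumer(topic, partition)

    config = ConsumerConfig()
    config.smartmodule(name=smartmodule, kind=SmartModuleKind.Map)   # apply the WASM map server-side

    records = []
    for record in consumer.stream_with_config(Offset.beginning(), config):
        records.append(record.value_string())
        if len(records) >= limit:
            break
    return records
```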
C
That also means it's portable and shareable across your team or others, and it also means that it can actually be applied at the edge. One of the other areas that we were interested in exploring is how we can use WebAssembly at the edge, on our edge controller, to help streamline the data cleansing and processing before it reaches the database or the cloud.

Again, what we'll do is take the list of records and put these into a data frame, and then, as you can see, it is already all there. This particular way of working means that your data teams don't have to rewrite the same code: they have WebAssembly code, which is extremely fast, and it also means that the security of your code goes up, since WebAssembly modules can't particularly do much.

Now, those two methods are great, but they still require you to manage the modules, put them up and share them around your team. One of the things that I mentioned in the earlier slides is our ability to move this into the stream, and that was the biggest sell for us: we can empower our data teams to take their cleaning and parsing, basically 90 percent of their work, pop it into a series of modules, and move it to where the data is. This is what that looks like now: we can configure a YAML file, we can say we want the MQTT type source here, and we can configure the broker. Then what can we do?

We can build out a series of steps here that will be deployed with this configuration, so every message that comes in on this topic will run through these steps. For us, we've used it to clean and parse the extra bytes that, unfortunately, will occasionally creep in from time to time, and we've used it to perform some threshold analysis.
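To make the idea concrete: every message that lands on the topic runs through the configured steps in order. Purely as an illustration of that flow (the real steps are WebAssembly SmartModules executed inside the stream, not client-side Python, and the step names and threshold below are invented):

```python
# Conceptual illustration only: the real steps are SmartModules configured on the
# connector and executed in-stream, not Python running client-side.
import json

def strip_stray_bytes(raw: bytes) -> str:
    # drop non-printable bytes that occasionally creep in from field devices
    return bytes(b for b in raw if 32 <= b < 127).decode()

def parse_json(text: str) -> dict:
    return json.loads(text)

def threshold_flag(reading: dict, limit: float = 80.0) -> dict:
    # simple threshold analysis: mark readings above a limit
    reading["over_limit"] = reading.get("pressure", 0.0) > limit
    return reading

PIPELINE = [strip_stray_bytes, parse_json, threshold_flag]

def run_pipeline(message: bytes) -> dict:
    value = message
    for step in PIPELINE:          # each message runs through every step, in order
        value = step(value)
    return value

print(run_pipeline(b'{"pressure": 93.5}\x00\x07'))   # -> {'pressure': 93.5, 'over_limit': True}
```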
C
All of this is done in stream, which means our data teams don't need to do all of that manual work. Now, just to show you how that actually works: if we literally run these two lines of code, we get a data frame back of data which has already been parsed. There's obviously still a bit of cleanup where there are NaNs and various things, but the general gist is being able to get your data team there.
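In the notebook, those two lines are essentially a consume call plus a DataFrame constructor. A sketch, reusing the illustrative fetch_clean_records helper from above (again an assumption, not the demo's actual code):

```python
# Sketch of the "two lines": pull already-cleaned records and load them into pandas.
# fetch_clean_records is the illustrative helper sketched earlier, not a Fluvio API.
import json
import pandas as pd

records = fetch_clean_records("mqtt-topic", partition=0, limit=100)
df = pd.DataFrame(json.loads(r) for r in records)

print(df.head())      # a final tidy-up (NaNs, stray values) may still be needed
```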
C
This particular way of working has been a powerful feature for us. You can also then publish this onto what InfinyOn call their SmartModule Hub, which means all of your team can pull it from one place, so they don't need to share repos or binaries or anything else. It becomes quite a powerful tool that enables teams to work faster.

So that's where we'll come back, and I guess, Grant, back over to you.
A
Excellent, thank you so much, Ben; that was a very insightful presentation. We have a couple of questions coming in. The first one is asking: did you have any Rust developers prior to using Fluvio?
C
No. One of my first experiences of Rust was actually creating a pull request on the Python SDK. I was really interested in using it and wanted to have a go at learning it, and it seemed like a good time and a good place. So no, you probably don't need to know too much Rust to do what you want to do. It obviously helps, but it's also a good skill to learn, I would say.
A
Excellent. Anyone from the audience, feel free to continue to put questions into the question window; we have quite a few more here. The next one: you talked about data security. How do you ensure data security in the platform?
C
From the edge all the way up to the cloud, everything is encrypted in transit. We also leverage quite sophisticated security at the edge by using TPM modules and various things, we ensure that data is encrypted at rest, and we make sure that all of the vendors that we use, such as InfinyOn, take security just as seriously as we do.
A
C
The real difference is that we have the ability to go full end to end. We have an edge controller, about the size of a Raspberry Pi, that can be plugged in out in the field. So where there are data gaps, we don't just say we can't get that data; we actually have an answer. We can put an edge controller in and plug in the various bits of our kit. This could be a sensor.

It could be a pump, it could be anything, and we get that data and securely transmit it up into the cloud. I'd say one of the other differences we have, and it sounds a bit tongue-in-cheek, is that being able to do a lot of the cleaning and processing in stream means that our data team, which is relatively small, is actually able to do a lot of processing by leveraging those modules and workflows.
B
No, I think those are the main ones; you covered them there. It's the ability to perform the data analytics.

So, for example, the data that we're trying to acquire might be a highly dense amount of data. If there's a bit of rotating machinery, a pump or a compressor, and you want to know what's happening with the vibration on it, you need to sample that vibration data, perhaps from an electronic accelerometer. You need to sample that data fast, and it's a lot of data being acquired.

It just doesn't make sense to send all of that data over a communications medium, via a satellite link or 4G or whatever, over the internet. So being able to process that data locally, apply some insights, and then be able to update, upgrade and improve that analysis at the edge, and transmit the intelligent version of it.

The summarized version of it, for example the frequency spectrum: that's powerful, and that's something different from most of the companies that are doing things with digital transformation and data analytics.
A
Excellent, thank you. So the next question is about the demo that you showed with the map and the dashboards.
C
I guess, James, would you like to take that one?
B
So we do, and I can't say who it is, but we do have some of our equipment on a pipeline monitoring flow, and it is feeding into an airport. That particular pipeline that I'm thinking of is doing real-time analysis of the data that goes by in order to automatically identify the volumes and the batches that are going through, so it generates a report which tells you, okay, this is the amount of flow.

This is the volume, and then this is the price, the value of the commodity, and it does all of this automatically, monitoring exactly what's going on. We're also working with another organization where the pipeline is supplying different product types, and we are supporting efficiency gain improvements on that pipeline.

In other words, how do we make sure that the pipeline is running in the most efficient manner, looking at pump operations and flow rates and what you can do to optimize them? This is particularly relevant in the context of electricity prices that are variable and going high, because these are electric pumps, and when you pump, how much you pump and how much power you put into your pumps are all very relevant to that.

So there are a couple I can mention, but yes, we're also working with a water company on a similar type of problem; the first step there is again looking at pump efficiency and optimization of the energy going into it.
A
Excellent, and for the person who asked that question, we can connect offline in a meeting and get into more detail on those projects, so we'll reach out to you shortly after the webinar.

The next question that just came in is: have you reached a limit yet on how many different edge sensors you can pull into your platform?
C
No. We've performed some virtualized testing with ridiculous amounts, and I have to say the InfinyOn Cloud has handled it all like a champ; it's got through it with no problems, so we're really happy. The one thing we did have to do was move our MQTT broker to a clustered environment and load balance it, so we were actually running into problems on the MQTT side before the actual streams.
A
Excellent, thank you. We have just a few more minutes here, so if anyone has questions, continue to type them into the chat window. The next question, I think, is more relevant to Alex Michalev: can you write SmartModules in other languages, and do you have the ability to chain transformations?
D
So, yes, you can chain transformations on connectors; that's in the YAML file which Ben demonstrated, and there you can apply multiple transformations. We provide JSON-to-SQL transformations and Jolt JSON transformation language transformations, supported by InfinyOn. There is work under way on being able to write SmartModules in Python, but currently the only supported language for SmartModules is Rust.
A
Excellent. It looks like we have one more question here; give me just a moment. I think this one's more relevant to you again, Alex: can you republish topics from inside a SmartModule?
D
Right now, no. So the answer to that question is actually complicated. Not the way it's described: you can't directly republish into two separate topics, but there is the option of creating different types of records and publishing them into the same stream. The stream isn't a queue, so it can contain multiple types of records. Whoever is interested in discussing the best way of architecting data around streams, I'm quite happy to support that conversation.