A
Welcome, everybody, to the Ceph Tech Talk. I'm Mike Perez, Ceph community manager, and I have the delight of Red Hat Emerging Technologies giving us a couple of interns, Jason and Neha, who have been working on a project related to Ceph. It is called Edge Application Streaming with Multiple Video Sources, and it's their summer intern project, so I'm really looking forward to hearing from these two on what they've been working on. So, Jason and Neha, will you please take it away?
B
Yeah, hi everyone, my name is Jason Wang.
B
So video streaming was a topic we saw could be incorporated into emerging technologies. As there are billions and billions of cameras in the world doing everything from security to simple video chatting, there's always a need for improvement, whether it's quality- or latency-wise, and so this summer our goal was to work on one such use case.
C
So what we'll be discussing today is the use case we have picked, the architecture we have designed to solve the issue, the components in that architecture, and then we'll show a demo video of our progress. Next slide, yeah.

So why move to the edge? Although core networks and the centralized cloud architecture already support the use of smart devices to an extent, they will be unable to handle the vast amounts of data that will be created at the edge by the over 75 billion connected devices that are predicted to be active by 2025.
C
With the upcoming 5G architectures and technologies, processing the data at the edge increases speed, and latency and all these issues are handled that way. So there is a need for more speed. The gravity of the data and the computation has shifted from core to edge as a result of technologies like the Internet of Everything, AI/ML, cloud gaming, HD streaming, and VR.
C
So we're going to talk about the architecture we have designed and go through it briefly. Our video sources can be YouTube, a webcam, or any such source. These are streamed using GStreamer, for which we have built a GStreamer plugin. For the source we are using a plugin that's already available, souphttpsrc, to stream from our live source.
C
We've built the sink plugin. Our sink plugin basically uses the S3 API to upload video to Ceph live from its buffer, and we're using the multipart approach here. Once all the parts have been uploaded to Ceph, we have Ceph bucket notifications set up.
C
So once the notifications come up, the Knative functions get triggered, and the Knative functions perform the necessary and required processing. In our case we are using OpenCV to do video stitching, so such analytics, or anything based on our requirements, can be done. Next slide.
B
So, just a little bit more about our GStreamer plugin and what GStreamer is. The goal for the plugin was to upload videos to Ceph with a GStreamer pipeline. Basically, GStreamer is a framework for creating streaming media applications, and GStreamer makes it very easy to write any type of streaming multimedia application.
B
So, although GStreamer has many useful plugins, there's no sink element into Ceph. Like Neha said, the source element that we used was already provided for us, which was the souphttpsrc plugin; basically, that plugin is able to read data from a remote location given a URL. The sink element is what we created: a custom RGW sink in Python, using the GStreamer Python bindings, and multipart upload is how we uploaded to Ceph object storage. Basically, we used multipart upload instead of single-part upload because objects are limited in size in Ceph, and multipart upload also limits the number of objects being uploaded. So that was one reason. And then, to talk a little bit more about properties in GStreamer: properties are really important because they control how the element behaves, in our case in our sink.
B
So for our project the end goal was to be able to stream live video, so we needed a way to tell the plugin to stop running and complete the multipart upload, which is why we have a limit-size property. Once that size is reached, the multipart upload will complete its last part and finish.
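As a rough illustration of what such an element can look like, here is a minimal sketch of a custom GStreamer sink written with the Python bindings. The element name, property names, and defaults (rgwsink, bucket, limit-size, 60 million bytes) are assumptions based on what is described in this talk, not the project's actual code; a real element would hand each buffer to the S3 multipart uploader sketched further down instead of just counting bytes.

```python
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstBase", "1.0")
from gi.repository import Gst, GstBase, GObject

Gst.init(None)


class RgwSink(GstBase.BaseSink):
    """Hypothetical sink element: counts incoming bytes and signals EOS once
    a configurable limit has been reached, so the upload can be completed."""

    __gstmetadata__ = ("rgwsink", "Sink",
                       "Upload incoming buffers to Ceph RGW via S3 multipart upload",
                       "example")

    __gsttemplates__ = Gst.PadTemplate.new("sink", Gst.PadDirection.SINK,
                                           Gst.PadPresence.ALWAYS,
                                           Gst.Caps.new_any())

    # Properties control how the element behaves; names and defaults here are
    # illustrative assumptions (a destination bucket and the limit-size that
    # tells the plugin to stop and complete the multipart upload).
    __gproperties__ = {
        "bucket": (str, "bucket", "Destination bucket name",
                   "demo-bucket", GObject.ParamFlags.READWRITE),
        "limit-size": (GObject.TYPE_UINT64, "limit-size",
                       "Complete the upload after this many bytes",
                       0, GObject.G_MAXUINT64, 60_000_000,
                       GObject.ParamFlags.READWRITE),
    }

    def __init__(self):
        super().__init__()
        self.bucket = "demo-bucket"
        self.limit_size = 60_000_000
        self.received = 0

    def do_get_property(self, prop):
        return {"bucket": self.bucket, "limit-size": self.limit_size}[prop.name]

    def do_set_property(self, prop, value):
        if prop.name == "bucket":
            self.bucket = value
        elif prop.name == "limit-size":
            self.limit_size = value

    def do_render(self, buffer):
        # Called for every buffer flowing into the sink; a real element would
        # append these bytes to the current multipart part and upload the part
        # once the minimum part size has been reached.
        self.received += buffer.get_size()
        if self.received >= self.limit_size:
            return Gst.FlowReturn.EOS   # stop streaming, finish the upload
        return Gst.FlowReturn.OK


GObject.type_register(RgwSink)
Gst.Element.register(None, "rgwsink", Gst.Rank.NONE, RgwSink)
```

For GStreamer to discover a Python element like this, the gst-python loader has to be installed and the file placed somewhere GStreamer searches (for example via GST_PLUGIN_PATH), which is presumably what the export steps mentioned below after pip install take care of.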
B
So we had some issues running GStreamer on Fedora. Basically we were able to write the code, but there was no folder for the plugin for us to export to. The way we worked around this was to create a container: we found out that GStreamer ran better on Ubuntu, so that was our base.
C
So, not just making it a Docker image, we have also set up a package which can be installed using pip install. There are some export commands that have to be run after installing our Python package, but once you do this on your local system, be it on any OS, you can use the GStreamer plugin. Next slide.
C
Yeah, so now, once the multipart upload has been done, we have bucket notifications set up. Once either a bucket is created or a bucket has been deleted, such events trigger the notifications, and once a notification comes up, the Knative function is triggered. As I said, any analytics or any features can be set up there; in our case we are using OpenCV in that function to stitch the videos, because that is related, or connected, to our streaming source.
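As a rough sketch of how such a bucket notification can be wired up against RGW's S3-compatible API from Python with boto3 (the endpoint URL, topic name, push endpoint, and bucket name below are placeholders, and the project's actual setup may differ):

```python
import boto3

RGW = "http://rgw.example:8000"          # placeholder RGW endpoint
CREDS = dict(aws_access_key_id="ACCESS_KEY",
             aws_secret_access_key="SECRET_KEY")

# RGW exposes an SNS-like API for notification topics; the push-endpoint
# attribute tells RGW where to POST events (e.g. a Knative broker/service).
sns = boto3.client("sns", endpoint_url=RGW, region_name="default", **CREDS)
topic_arn = sns.create_topic(
    Name="stitch-topic",
    Attributes={"push-endpoint": "http://knative-broker.example:8080"},
)["TopicArn"]

# Attach the topic to the bucket so object-created events fire notifications.
s3 = boto3.client("s3", endpoint_url=RGW, **CREDS)
s3.put_bucket_notification_configuration(
    Bucket="neha",
    NotificationConfiguration={
        "TopicConfigurations": [{
            "Id": "stitch-on-upload",
            "TopicArn": topic_arn,
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)
```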
B
So we use OpenCV to stitch our videos together. Basically, we capture the videos with VideoCapture and then we read the frames. The technique we use with OpenCV is keypoint detection, which basically maps the points in one image to the corresponding points in the other image; for us we need at least four matches, and we also use a fixed homography matrix.
B
So this means that the keypoint detection and the feature matching are done once at the start, and then those keypoints are used for the subsequent video frames. This reduces the computational cost, but the only really important thing to know is that the video must be fixed, or rather the camera that you're shooting with has to be still. So here it is.
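A minimal sketch of this approach in Python with OpenCV is below. It uses ORB keypoints, a brute-force matcher, and a RANSAC-estimated homography that is computed once from the first frame pair and reused for every subsequent frame; the specific detector, matcher, and file names are illustrative assumptions, not necessarily what the project used.

```python
import cv2
import numpy as np

def compute_homography(left, right, min_matches=4):
    # Detect keypoints in both frames and match them, mapping points in one
    # image to the corresponding points in the other; at least four matches
    # are needed to estimate the homography.
    orb = cv2.ORB_create()
    kp_l, des_l = orb.detectAndCompute(left, None)
    kp_r, des_r = orb.detectAndCompute(right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_r, des_l), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise RuntimeError("not enough keypoint matches to stitch")
    src = np.float32([kp_r[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_l[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def stitch_frames(left, right, H):
    # Warp the right frame with the fixed homography and overlay the left.
    h, w = left.shape[:2]
    canvas = cv2.warpPerspective(right, H, (w * 2, h))
    canvas[0:h, 0:w] = left
    return canvas

# Capture both videos, compute the homography once from the first frame
# pair, then reuse it for every frame (hence the camera must be still).
cap_l, cap_r = cv2.VideoCapture("left.mp4"), cv2.VideoCapture("right.mp4")
ok_l, frame_l = cap_l.read()
ok_r, frame_r = cap_r.read()
H = compute_homography(frame_l, frame_r)
while ok_l and ok_r:
    stitched = stitch_frames(frame_l, frame_r, H)
    # write `stitched` to a VideoWriter or display it here
    ok_l, frame_l = cap_l.read()
    ok_r, frame_r = cap_r.read()
```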
C
So at the beginning we set up our S3 client and we start the multipart upload. Then, because this is live streaming, we get the data in buffer by buffer. Once our minimum part size limit has been reached, we call the handle-part function. The handle-part function is where the live-streamed data is uploaded part by part and, as a result, the ETag of each uploaded part is stored.
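A minimal sketch of that flow with boto3 against the RGW S3 endpoint might look like the following; the endpoint, credentials, bucket, and key names are placeholders, and the real sink feeds parts from the live GStreamer buffers rather than the in-line sample data shown here.

```python
import boto3

# Placeholder endpoint, credentials, bucket, and object key.
s3 = boto3.client("s3",
                  endpoint_url="http://rgw.example:8000",
                  aws_access_key_id="ACCESS_KEY",
                  aws_secret_access_key="SECRET_KEY")
bucket, key = "neha", "stream.mp4"

# Start the multipart upload once, up front.
upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
parts, part_number = [], 1

def handle_part(data):
    """Upload one part of the live stream and remember its ETag."""
    global part_number
    resp = s3.upload_part(Bucket=bucket, Key=key,
                          UploadId=upload["UploadId"],
                          PartNumber=part_number, Body=data)
    parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
    part_number += 1

# In the real sink, buffers arrive live; whenever at least the minimum part
# size (5 MB for non-final parts) has accumulated, a part is uploaded.
handle_part(b"\x00" * 5 * 1024 * 1024)

# Once the size limit is reached, the last part (which can be any size) is
# uploaded and the multipart upload is completed using the stored ETags.
s3.complete_multipart_upload(Bucket=bucket, Key=key,
                             UploadId=upload["UploadId"],
                             MultipartUpload={"Parts": parts})
```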
C
The minimum part size is actually 5 MB, but the last part can be of any size, and once all the parts have been uploaded, we do the complete multipart upload to finish the process. And because we have multiple sources we're streaming from, we have written a pipeline function where we give our Ceph credentials one time and then call a function where we pass our command line.
C
The basic GStreamer command line, as you can see, uses souphttpsrc, through which we're streaming live; that's our source. And there's our sink, the rgwsink plugin. We keep giving the individual sources and bucket names for those sources, and we call the pipelines.
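A hedged sketch of launching such a command line from Python is below; the rgwsink element and its property names follow the earlier sketch on this page and are assumptions, as are the source URLs and bucket names.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

def run_pipeline(source_url, bucket):
    # souphttpsrc pulls the live HTTP source; the custom rgwsink (sketched
    # earlier) uploads it to Ceph. Property values here are illustrative.
    launch = (f"souphttpsrc location={source_url} is-live=true ! "
              f"rgwsink bucket={bucket} limit-size=60000000")
    pipeline = Gst.parse_launch(launch)
    pipeline.set_state(Gst.State.PLAYING)
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.EOS | Gst.MessageType.ERROR)
    pipeline.set_state(Gst.State.NULL)

# One call per source, each with its own bucket name (placeholders).
run_pipeline("http://example.com/stream1", "neha")
run_pipeline("http://example.com/stream2", "neha-2")
```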
C
So that's the basics of our code, and we have packaged the whole thing; we have containerized the whole thing. Ubuntu is our base setup, where all the requirements for GStreamer are already downloaded and set up.
C
So if you're doing this multiple times it becomes easier. The default part size is 5 MB; here I've just given it 6 million bytes, and because this is a live stream, we have kept 60 million bytes as the limit to stop the upload, so intuitively you will have about 10 parts. For this demo we're giving two videos right now. So I'm giving my first video link, and I've defined the bucket name as neha, right.
C
Actually, when this is implemented in real life, we'll be using a threading process, because this all happens in parallel, but for a much clearer understanding we are running it part by part here. Once it's done, it just says done.
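As a small sketch of what that parallel version could look like, one thread per source pipeline (the run_pipeline helper below is a stand-in for the pipeline function sketched earlier, and the URLs and bucket names are placeholders):

```python
import threading

def run_pipeline(url, bucket):
    # Stand-in for the pipeline helper sketched above; the real version
    # builds and runs the GStreamer pipeline for this one source.
    print(f"streaming {url} into bucket {bucket}")

# One thread per video source so all sources stream in parallel.
sources = [
    ("http://example.com/stream1", "bucket-1"),
    ("http://example.com/stream2", "bucket-2"),
]
threads = [threading.Thread(target=run_pipeline, args=(url, bucket))
           for url, bucket in sources]
for t in threads:
    t.start()
for t in threads:
    t.join()
```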
B
So here I'm just going to show you how our stitching function works. Like I said before, an important part when we're shooting the video is that the camera needs to be fixed, and when I was shooting my own videos to test, my camera was very shaky because I was just filming by hand, so we needed a better video to show for this demo.
B
We cropped a video into two sections, which is what you're seeing here. The reason why we recorded this on a Mac was because the Fedora UI was unable to play back our video, and on the Mac we're able to stream it fine.
B
So the left and right are the individual videos that you saw Neha upload; from there they're uploaded to Ceph, and then our stitching function takes those videos from Ceph and stitches them together, and that's what you see in the result.
C
As you can see, this process or this architecture could simply have Kafka in it, but the reason we have chosen Ceph is that Ceph is one of the most reliable and stable storage platforms out there. When we're talking about live streaming, there's a lot of data, like high-definition data, a few GBs of data that get recorded during these live streams, and Ceph has the ability to handle that amount and also those types of data.
B
C
So for future work: once the 360 video is stitched, you can see we re-upload it back into Ceph, and as an extension of the current work we can create an RGW NFS gateway to export the 360-degree stitched view to an NFS share and then do RTMP video streaming from there.
C
B
And we'd also like to thank Mike and the Ceph team for allowing us to show our demo and our project here. So thank you very much.
A
Oh, thank you both for presenting; this was a great overview. I was really impressed with the stitching work myself. So, audience, do we have any questions?
A
C
Actually, for the bucket notification we've had an implementation issue here, but it works like this: when a bucket gets created, we'll be creating the topic storage and we'll be creating the bucket names. So if there are multiple sources, we'll probably name them something like bucket one, two, three, or something like that. We'll automate the process so that when this particular bucket is created, it will trigger a notification, and we set the HTTP port, so once the notification has been created it will trigger the Knative function.
C
The Knative function containerizes the whole process, so it will run the analytics. Basically, once the notification is triggered, it will run the OpenCV function, where that particular bucket's content is downloaded and then the videos are stitched together.
A
Oh, go ahead. Sorry.
C
A
Somebody was asking in chat for the URL to the GitHub project. Oh.
D
A
All right, that's a lot of awkward silence, so, all right, I want to thank Jason and Neha for going ahead and providing us this great content on streaming multiple video sources with Ceph. Very awesome.
A
I also understand, too, that this is the end of your program, but I hope to see you in the community and join us in future projects.
A
Yeah, all right, thanks everybody for joining us. We have another tech talk happening August 28th, I believe, at 1700 UTC.
A
That will be on the Secure Token Service layer on top of the RADOS Gateway. So come join us for that, and again, thank you, Jason and Neha, for joining us, and we'll see you all next time.