From YouTube: Federator.ai: AIOps for OpenShift in MultiCloud, Brian Jeng (ProphetStor), OpenShift Commons AIOps SIG
All right, I'm Brian, an SI over at ProphetStor, and I'm going to be talking about Federator.ai, which is an AIOps solution for OpenShift and multi-cloud environments. Just to give you a little background on the multi-cloud market: it's growing really quickly. More and more businesses are shifting their IT infrastructure to the cloud; it's projected to grow from 20 percent to 40 percent by the end of 2019, and it's already worth over a trillion US dollars.
You'll have to do a lot of research on your own, so just for day one you're already at a roadblock. And then for day two, even after you deploy your application into the cloud, you'll have to regularly monitor and manage the resource usage, just to safeguard against inefficiently utilizing your cloud instances. If you over-provision, you waste a lot of money; if you under-provision, you have a lot of application issues.
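As a rough illustration of that over-provisioning cost (not from the talk; the instance size, peak usage, and per-vCPU price below are made-up numbers), a few lines of Python show how unused headroom turns directly into monthly spend:

```python
# Illustration only: rough cost of over-provisioning a cloud instance.
# The instance size, peak usage, and price below are hypothetical numbers.

HOURS_PER_MONTH = 730

def monthly_waste(vcpus_provisioned: float, vcpus_peak_used: float,
                  price_per_vcpu_hour: float) -> float:
    """Money spent each month on vCPU capacity the workload never uses."""
    unused = max(vcpus_provisioned - vcpus_peak_used, 0.0)
    return unused * price_per_vcpu_hour * HOURS_PER_MONTH

# An 8-vCPU instance whose workload peaks at 3 vCPUs, at $0.05 per vCPU-hour:
print(monthly_waste(8, 3.0, 0.05))  # 182.5 USD/month of idle capacity
```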
So that's where we come in with Federator.ai. We're an AIOps solution that simplifies the cost optimization process for both day-one and day-two operations in multi-cloud environments. For day one, just give us the application and an optimization policy, and we'll recommend which cloud service provider and instance type to choose. We do all the legwork for you; you just have to tell us what you want to deploy. And then for day two, that's where our machine learning AI actually comes in.
Here's some more detail about our day-one deployment. The user tells us which application they want to deploy, approximately how many requests per day it will serve, and which policy to apply, whether that's minimizing cost, maximizing performance, or maintaining an SLA, and we'll recommend which cloud provider and instance type is the most suitable. If the user wants to manually deploy into a cloud of their own choosing, we just give them the amount of resources they'll need directly. And then for day two, this is where our machine learning AI comes in.
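The talk doesn't show the recommendation logic itself, so the following is only a sketch of the day-one workflow described above: application load and policy in, provider and instance type out. The catalog entries, prices, and the sizing rule are hypothetical placeholders.

```python
# Sketch of the day-1 flow: application load + policy in, instance choice out.
# Catalog, prices, and the sizing rule are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class InstanceOption:
    provider: str
    instance_type: str
    vcpus: int
    mem_gib: int
    usd_per_hour: float

CATALOG = [
    InstanceOption("aws", "m5.large", 2, 8, 0.096),
    InstanceOption("gcp", "n2-standard-2", 2, 8, 0.097),
    InstanceOption("azure", "D2s_v3", 2, 8, 0.096),
    InstanceOption("aws", "m5.xlarge", 4, 16, 0.192),
]

def recommend(requests_per_day: int, policy: str) -> InstanceOption:
    # Toy sizing rule: assume every 500k requests/day needs about one vCPU.
    needed_vcpus = max(1, round(requests_per_day / 500_000))
    candidates = [o for o in CATALOG if o.vcpus >= needed_vcpus]
    if policy == "minimize_cost":
        return min(candidates, key=lambda o: o.usd_per_hour)
    # "maximize_performance" (SLA handling omitted): largest option that fits.
    return max(candidates, key=lambda o: o.vcpus)

print(recommend(1_000_000, "minimize_cost"))
```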
We learn the future usage of each pod. If you look at this graph, there's a blue solid line, which is the observed CPU usage, and a white dotted line, which is the predicted CPU usage. You can see we're about ten minutes ahead, and the two lines are closely intertwined, which shows how accurate our prediction engine is.
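The talk doesn't describe the prediction model itself, so purely as a stand-in for the idea of forecasting per-pod CPU usage about ten minutes ahead, here is a minimal linear-extrapolation sketch in Python; Federator.ai's actual engine is certainly more sophisticated than this.

```python
# Minimal stand-in for "predict pod CPU usage ~10 minutes ahead".
# Linear extrapolation over recent samples; not Federator.ai's actual model.
from statistics import mean

def predict_cpu(samples: list[float], step_seconds: int = 60,
                horizon_seconds: int = 600) -> float:
    """Fit a least-squares line to recent CPU samples and extrapolate."""
    n = len(samples)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(samples)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples)) / \
            sum((x - x_bar) ** 2 for x in xs)
    steps_ahead = horizon_seconds / step_seconds
    return max(y_bar + slope * ((n - 1) - x_bar + steps_ahead), 0.0)

# One sample per minute (in cores); forecast 10 minutes out.
print(predict_cpu([0.42, 0.45, 0.44, 0.48, 0.50, 0.53]))
```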
Okay, and this is a study done by TSO Logic in 2017, and it's the kind of use case we're going after immediately. About 84% of on-premise IT environments were over-provisioned, and if they moved directly to the cloud with a one-to-one match of their resources, they would actually pay more than they were currently paying on-premise. But if they sized their cloud instances just right, which is what we're aiming to do, they could save 36% in monetary terms and 60% in terms of resources.
Now let me just switch over here. It's not really a demo; this is just a side-by-side comparison of the native Kubernetes Horizontal Pod Autoscaler and a horizontal pod autoscaler driven by our Federator.ai. We found benefits in three main categories: we can serve the same workload using 19% fewer replicas, we can reduce the number of CPU over-limit instances by 61 percent, and we can reduce the out-of-memory instances by almost 90%, and we have graphs that show all of this data.
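For context on what prediction-driven autoscaling changes: the native Kubernetes HPA is reactive and uses the documented formula desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric). A predictive scaler can apply the same formula to a forecast value instead, so replicas are added before the load arrives. Here is a minimal sketch of that difference; this is not ProphetStor's implementation.

```python
# Native HPA formula vs. a prediction-driven variant (illustrative only).
import math

def hpa_desired_replicas(current_replicas: int, current_metric: float,
                         target_metric: float) -> int:
    # Kubernetes HPA: desired = ceil(current * currentMetric / targetMetric)
    return math.ceil(current_replicas * current_metric / target_metric)

def predictive_desired_replicas(current_replicas: int, predicted_metric: float,
                                target_metric: float) -> int:
    # Same formula, but driven by the forecast value, so scaling happens
    # before the spike instead of after it.
    return math.ceil(current_replicas * predicted_metric / target_metric)

# CPU utilization target 70%; currently at 60%, forecast to hit 95% in 10 min.
print(hpa_desired_replicas(4, 0.60, 0.70))         # 4 replicas (no change yet)
print(predictive_desired_replicas(4, 0.95, 0.70))  # 6 replicas, scaled ahead of time
```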
The big one here is actually the out-of-memory instances. Basically, every time an application hits an out-of-memory condition, it stalls and crashes, so you want to avoid these at all costs. You can see the green line, where we only have two such instances, whereas the rest is just the native Kubernetes HPA; we can reduce these by almost 90%. And just note that this is only the horizontal comparison: our Federator.ai can also be applied to the Vertical Pod Autoscaler, the Cluster Autoscaler, and the native Kubernetes scheduler.
So all these different facets of your OpenShift cluster can be optimized using machine learning with our solution. And what's really cool is that once we have those usage predictions, we can feed them back into our day-one tool, and now we have a new recommendation of which cloud provider and instance type to choose. So your full stack, from your resource usage all the way up to your cloud provider, is fully optimized using Federator.ai.
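As a purely illustrative sketch of closing that loop, the predicted peak usage can be fed back into the same kind of instance selection done on day one, for example by re-picking the cheapest catalog entry that still covers the forecast peak. The catalog entries here are hypothetical.

```python
# Illustrative only: re-running the instance choice from predicted peak usage.
# Catalog entries are hypothetical (provider, type, vcpus, mem_gib, usd_per_hour).
CATALOG = [
    ("aws", "m5.xlarge", 4, 16, 0.192),
    ("aws", "m5.large", 2, 8, 0.096),
    ("gcp", "n2-standard-2", 2, 8, 0.097),
]

def re_recommend(peak_vcpus: float, peak_mem_gib: float):
    """Cheapest catalog entry that still covers the forecast peak usage."""
    fits = [o for o in CATALOG if o[2] >= peak_vcpus and o[3] >= peak_mem_gib]
    return min(fits, key=lambda o: o[4]) if fits else None

# The predictor says this app peaks at 1.6 vCPUs / 6 GiB, so a 2-vCPU
# instance covers it and a larger original pick could be downsized.
print(re_recommend(1.6, 6.0))   # ('aws', 'm5.large', 2, 8, 0.096)
```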
I'm just going to go over to OperatorHub.io here to show you how simple it is to deploy us. We're already listed on OperatorHub.io; just click on us, and all our details, the instructions for configuring the custom resources, and how to install it are right here. We have all the links on the sidebar, and that's it. Thank you.