Description
Project Thoth is developed in the AICoE at Red Hat and aims to provide recommendations on Python software stacks, runtime environments, and deployment configurations to users through different types of integrations natural to developers: GitHub pull requests and issues, GitOps via Argo CD, GitHub Apps, and more. Thoth is implemented using a microservice pattern, deployed across a number of OpenShift namespaces and clusters.
A: All right, welcome everybody to another OpenShift Commons briefing. My name is Beverly, and today we have Francesco from Red Hat's AI Center of Excellence joining us to discuss data-driven engineering and Project Thoth, which is one of the projects being incubated at the AI Center of Excellence. I believe Francesco has a demo to showcase later in the briefing, so without further ado I'll let him introduce himself. If you have any questions, we'll have a live Q&A session at the end, but in the meantime feel free to add your questions in the chat wherever you're watching this live stream from, and we'll relay them back here. So with that, Francesco, you can take it away.

B: Thank you very much, Beverly. I hope you can hear me correctly and see the slides.

A: Nope, no. Oh.
B: Are you using the browser, by chance? What about now? Perfect. Thank you very much, Beverly. So hi everyone, my name is Francesco, I'm part of the AI Center of Excellence, and I work on Project Thoth. I'm a senior data scientist, and today we're going to cover three main topics. First, I want to show you what Project Thoth is and the integrations that we provide in the frame of GitOps principles, aligned with the way developers work and the tools they use in their daily work.

B: The second part will focus on how we monitor and observe a service that is deployed on OpenShift across multiple namespaces, and how we define what are called service level indicators and the service level objectives that we set. The third part will show how we use these metrics and indicators to help us improve the system and also improve the user experience for the service that we provide, which is basically what we do with data-driven engineering in this case. But let me start by introducing Project Thoth. Project Thoth is an open source project, and you can find it on GitHub under the thoth-station organization. One of the goals of Thoth is to help developers in the selection of dependencies, depending on the requirements that they have. Just to give you some examples.
B: Let's imagine that I'm a data scientist and I want to train my model, and I have to select some dependencies, for example TensorFlow, but I don't know which version I should use or which type of runtime environment I should use, because this information is not part of my domain at the moment. What I want to do is just to train.

B: Another example might be that I need to deploy this model, or this application, a Python application in this case, and I want to know which software stack does not contain any CVEs or vulnerabilities; I don't want to have any libraries that might contain some of them. This is also another question that Thoth is able to answer, and these are the kinds of questions we target.

B: So if a user has some requirements in terms of performance or security, for example, there is a service that basically offloads this work, because it is able to answer those questions directly, so the developer or data scientist does not need to worry about that.
B
So
these
are
some
of
the
questions
or
some
of
the
way
that
project
thought
aimed
to
help
the
developers.
So,
if
you
think
of
dependency
management
nowadays,
there
are
many
types
of
release
or
main
type
of
new
project
that
are
released,
and
some
of
the
question
might
be,
for
example,
if
I
should
use
some
new
project
for
if
it's
well
maintained
or
if
it's
a
good
health
is,
if
it's
a
you
a
good
health
in
the
community.
B
So
this
kind
of
question
can
be
also
answered
by
project
thought,
but
this
is
these
are
some
of
the
way
that
we
can
help
so
some
of
them.
I
already
mentioned,
for
example,
if
I
want
to
have
an
application
deployed,
and
I
need
to
build
that
image
with
the
with
that
software
stack,
then
tot
is
able
to
verify
that
each
of
your
library
that
you
are
using
do
not
contain
any
cv
and
any
vulnerability
and
provide
you
with
the
software
stack
that
basically
avoid
this
kind
of
of
issues.
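As an aside, the CVE check described here can be sketched as a simple filter over candidate package versions. This is a minimal illustration with a hypothetical in-memory CVE database; Thoth's actual resolver is far more elaborate, and the package names, versions, and CVE IDs below are invented.

```python
# Minimal sketch: filter candidate package versions against recorded CVEs.
# All names, versions, and CVE IDs are hypothetical examples.
KNOWN_CVES = {
    ("example-lib", "1.0.0"): ["CVE-2021-0001"],
    ("example-lib", "1.1.0"): [],
    ("other-lib", "2.3.0"): [],
}

def cve_free_versions(package, candidates):
    """Return only the candidate versions with no recorded CVEs.

    Versions absent from the database are treated as having no recorded
    CVEs, which is a simplifying assumption for this sketch.
    """
    return [v for v in candidates if not KNOWN_CVES.get((package, v), [])]

print(cve_free_versions("example-lib", ["1.0.0", "1.1.0"]))  # ['1.1.0']
```

A resolver in this spirit would run such a filter over every package in the dependency graph before pinning a stack.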
B: If we talk about the data science domain, as I said, there is for example the case of training on GPU or CPU, and you want to know which is the most performant software stack. Or, on the inference side, if you want to deploy, for example, on some edge devices, and you have different combinations, you don't know which software stack is more performant for each case. This kind of question can be answered by Project Thoth.

B: Let me move ahead. This is one example of the metrics that we collect. As I said, we started by focusing on the Python ecosystem, and we look at what is happening mainly on the PyPI index, as shown in this slide, to see how fast and how many packages are released per day, because this is very important for the system to learn about these packages. As I said, Project Thoth tries to answer these questions for the selection of the software stack.
B
So
the
first
thing
that
needs
to
learn
is
which
python
packages
exist
and
it
needs
to
learn
observation
from
these
packages.
So
we
need
to
know
how
many
are
released
in
order
to,
for
example,
target
a
certain
level
of
packages
that
needs
to
be
learned.
We
said
every
day,
so
we
can
always
be
up
to
date
for
the
user
to
receive
the
most
updated.
B
The
software
stack,
of
course,
so
this
kind
of
metrics
are
one
example
of
indicators
that
you
need
to
take
into
account
for
your
in
this
case,
for
project
thought
that
might
help
in
the
in
some
decision,
also
in
terms
of
investment,
and
I
don't
know
new
resources
that
are
required.
So
in
this
way,
when
you
take
the
decision,
you
will,
you
are
using
the
data-driven
engineering
way
in
in
order
to
select
to
make
some
decisions.
B
So
how
does
tot
is
able
to
answer
of
this
question?
This
is
a
list
of
observations
that
are
stored
in
the
knowledge
graph,
so
project
thought
has
different
services
and
components
that,
first
of
all
start
to
collect
the
knowledge
regarding
all
these
packages.
As
I
said
so,
there
are
different
type
of
info
observation.
Let's
try
to
go
through
some
of
them.
B
So
there
are
different
types
of
solver
that
thought
contain
and
for
each
of
them
we
want
to
learn
if
that
package
is
going
to
be
installed
or
not.
So,
as
you
can
see,
this
kind
of
observation
allow
thought
to
provide
the
recommendation
in
terms
of
the
packages
that
best
fit
your
runtime
environment.
B
Second,
is
the
performances
so
on
the
application
stack
level
project
thought.
Has
these
two
components?
If
you
talk
about
the
performance,
there
are
dependency
monkey
and
the
moon
dependency.
Monkey
is
a
component
that
is
able
to
browse
through
the
dependency
graph
and
create
combination
of
the
depend
of
the
dependencies
that
are
available
for
a
certain
package
and
submit
them
to
a
moon.
A
moon
is
a
service
that
is
able
to
run
some
specific
performance
indicator.
B
We
call
so
some
specific
script
that
needs
to
be
run
in
order
to
gather
performance,
and
we
do
this
for
a
different
type
of
ml
frameworks
or
tensorflow
pytorch,
and
we
can
do
that
also
for
other
types
of
layer
in
the
software
stack.
So
if
you
go
at
the
python
interpreter
level,
for
example-
and
this
kind
of
observation
are
all
stored
in
thought
in
in
order
to
provide
the
recommendation
in
terms
of
performances
for
your
software
stack,
so
should
I
use
pytorch
or
tensorflow
for
this
kind
of
application?
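A performance indicator in this spirit is just a small, repeatable script that times a representative workload and reports a number that can be stored as an observation. This sketch times a pure-Python matrix multiplication as a stand-in workload; Thoth's real performance indicators for frameworks like TensorFlow are separate, more elaborate scripts.

```python
import timeit

def matmul(a, b):
    """Naive matrix multiplication, used here only as a stand-in workload."""
    n, m, p = len(a), len(b[0]), len(b)
    return [[sum(a[i][k] * b[k][j] for k in range(p)) for j in range(m)]
            for i in range(n)]

def performance_indicator(size=30, repeats=5):
    """Return the best-of-N wall-clock time for the workload, in seconds."""
    a = [[1.0] * size for _ in range(size)]
    b = [[2.0] * size for _ in range(size)]
    # Best-of-N reduces noise from scheduling and caches.
    return min(timeit.repeat(lambda: matmul(a, b), number=1, repeat=repeats))

print(f"matmul best time: {performance_indicator():.6f}s")
```

The resulting number, tagged with the runtime environment it was measured in, is the kind of observation a knowledge graph could store and compare across software stacks.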
B: At the package level we collect, as I said, things related to security. We have CVEs that come from one specific source for the Python ecosystem, and we also created some security analyzers, using packages like Bandit, that are able to identify vulnerabilities inside the packages. We store that information so it can be used if the user has requirements in terms of security. Packages are also tracked, so we are able to gather observations regarding the API.

B: This is in order to learn whether a package is going to run, or whether there are any API incompatibilities that would prevent an application from running on a specific environment. And this is the third class, what we call source code meta-information. If you think of all the packages that are available as open source, this kind of observation focuses on the health of a project: we want to know whether I should use it.
B: Now we are going to see how we provide these recommendations: what kinds of tools were created and why there are all these different types of integrations. Let's start with the CLI tool. Let's say that in this case I am a developer, and I want to start my application and get a recommendation for the most performant software stack.

B: One of our integrations is a CLI tool called Thamos, which you can install simply by doing pip install thamos in Python, and then ask for Thamos advice. This integration is able to provide you with a Pipfile.lock that you can install and use for your application. Why did I say Pipfile.lock? Thoth is also very focused on the dependency management work, and it cares a lot about the possibility of reproducible builds.
B
So
if
you
want
to
share
your
work,
if
you're
a
data
scientist
or
if
you're
a
developer-
and
you
want
to
share
this
kind
of
code-
you
need
to
provide
not
only
the
direct
dependencies,
but
you
need
to
have
a
locked,
a
pin
down
software
stack
to
show
everything
related
to
that
to
the
software
stack,
so
not
just
the
direct
one,
but
also
the
transitive
one.
The
ashes-
and
this
is
one
of
the
example
and
why
we
started
to
use
a
similar
files
to
provide
to
the
users.
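The difference between direct requirements and a fully pinned lock can be sketched as follows. The packages, versions, and artifact bytes here are invented, and the structure only loosely mirrors the Pipfile.lock format; it is meant to show why the lock carries exact versions and hashes for transitive dependencies too.

```python
import hashlib
import json

# Hypothetical resolved stack: one direct dependency plus a transitive one,
# each pinned to an exact version with an artifact hash, as a lock file does.
direct = {"example-lib": "*"}  # what the developer actually asked for
resolved = {
    "example-lib": ("1.1.0", b"artifact-bytes-1"),
    "example-dep": ("0.9.2", b"artifact-bytes-2"),  # pulled in transitively
}

lock = {
    name: {
        "version": f"=={version}",
        "hashes": ["sha256:" + hashlib.sha256(artifact).hexdigest()],
    }
    for name, (version, artifact) in resolved.items()
}

print(json.dumps(lock, indent=2, sort_keys=True))
```

Installing from such a lock reproduces the same bytes on every machine, which is what makes the build reproducible.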
B: The second integration is more focused on the data science world. In this case I am a data scientist and I'm starting to work in my Jupyter notebooks, or in JupyterLab, and we want to provide a tool that basically offloads, or makes easier, the life of data scientists, so that they do not need to worry about performance or dependency issues, because there is a tool that can help them and make this easy for them. That is why this kind of extension has been created in Project Thoth.

B: That is something very important if you want to allow others to reuse your work. Another type of integration we have is GitHub Apps. One of them is Qeb-Hwt; it runs as a check run every time you open a pull request, and it is able to ask Thoth for advice on your pull request,

B: if there are any changes to your software stack, and to give you suggestions or recommendations about what is important, or what kind of justifications are provided for your software stack. Another important integration that we have is bots.
B
As
I
said,
we
want
to
reduce
actually
the
work
of
the
hume
of
humans
in
terms
of
this
kind
of
problems
related
to
dependencies,
so
that
they
can
basically
focus
on
other,
more
important
topics
and
the
cechetta
or
cabbage
in
this
case
is
an
application
that
you
can
install
on
github
and
is
able
to
keep
your
dependencies
up
to
date.
So,
as
I
show
the
beginning,
there
are
different
ways
that
we
want
to
help
and
what
the
cabbage
is
going
to
do.
B
If
you
imagine
starting
working
on
your
jupiter
notebooks
or
on
your
local
environments,
and
then
you
push
something
to
git.
So
if
you
think
about
the
github's
principle,
the
software
stack
that
is
stored
on
your
github
repository
then
is
going
to
be
basically
maintained
by
this
bot
in
terms
of
dependencies.
B
So
that
imagine
that
there
is
a
cve
appearing
at
some
at
some
point
related
to
the
software
stack.
Then
this
service
is
of
this
bot
is
using
tot
service
in
order
to
provide
you
with
an
updated
software,
stack
that
eliminates
that
dependency,
that
which,
which
has
the
vulnerability
and
in
this
way
you
don't
need
to
worry
about
this
kind
of
problems.
B
Regarding
deployment
or
using
of
images,
source
to
image
is
well
known
and
thought
to
one
wanted
to
provide
an
integration
that
can
help
also
in
in
this
kind
of
tool.
So
if
you
imagine
that
I'm
starting
to
build
my
image
in
order
to
release
it
on
query
and
so
that
they
can
be
used
to
be
deployed
on
some
environment,
then
the
added
value
of
adding
something
like
that
is
to
make
sure,
for
example,
that
your
environment
is
specifically
targeting
some
environment.
B
So
if
I
ask
for
cuda
or
some
specific
random
environment
and
the
system
is
not
using
this
kind
of
a
runtime
environment,
then
top,
for
example,
will
make
the
build
fail,
because
there
are
checks
which
are
not
satisfied
so
you're
asking
for
a
specific
environment.
B
But
you
miss,
for
example,
the
buddha
libraries,
and
in
this
case
this
would
basically
help
everything
in
a
pipelines
and
to
automate
all
this
all
this
work,
and
the
last
thing
is
something
that
we
recently
also
released,
which
is
an
optimizing
deployment
pipeline
and
what
we
want
to
do
in
this
case.
So
tot
is
able
to
provide
you
with
an
optimized
software
stack.
B
So
this
is
just
a
recap
of
what
I
just
showed
you
in
terms
of
the
integrations
and,
as
you
can
see,
they
cover
basically
all
that
all
the
tool
that
someone
can
use
or
they
can
use
in
if
you
want
to
deploy
on
on
a
cloud
environment
or
if
you
want
to
work
of
your
of
your
notebooks,
and
this
also
helps
a
lot
in
terms
of
gitobs.
B
B
Beside
the
software
stack,
of
course,
that
will
help
or
with
we,
that
the
user
can
use
in
order
to
provide
the
inputs
in
terms
of
the
requirements
that
it
has,
for
example,
if
you
think,
in
terms
of
recommendation
type
thought
as
a
different
type
of
recommendation
that
can
be
provided.
So,
as
I
said,
if
you
want
just
your
latest
and
greatest
software
stack,
that
is
something
that
you
can
ask
in
terms
of
recommendation
type,
if
you're
interested
in
having
a
secure
security
recommendation,
because
you
want
to
deploy
in
product.
A: The internet always wins in the end.

B: Sorry, I'm back now, I was back here. I'm not sure where I stopped; I just don't see the slides.
B: So, as I was saying, in order to interact with the Thoth services you also need to provide a configuration file, which is stated in this .thoth.yaml file, and in here you can basically describe the type of recommendation that you want. Sorry, so when we talk about recommendations... right, yes.

B: As I was saying, in order to interact with the Thoth service, there is a configuration file. This is a .thoth.yaml file that needs to be provided, and in this configuration file you can select the runtime environment that you want to use for your application, which the Thoth service will use in order to provide you with the recommendation. In terms of recommendation types, there are different kinds: if you want the latest and greatest software stack, you would just use the recommendation type latest.
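A .thoth.yaml along these lines might look as follows. This is an illustrative sketch only: the exact field names and accepted values should be checked against the Thoth documentation, and the operating system and Python versions shown are arbitrary examples.

```yaml
# Illustrative .thoth.yaml sketch (field names approximate).
requirements_format: pipenv
runtime_environments:
  - name: training
    operating_system:
      name: ubi
      version: "8"
    python_version: "3.8"
    recommendation_type: performance
  - name: production
    operating_system:
      name: ubi
      version: "8"
    python_version: "3.8"
    recommendation_type: security
```

Each entry describes one runtime environment the stack should be resolved for, together with the recommendation type requested for it.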
B
Then
you
would
use
different
type
of
recommendation,
and
this
is
just
one
of
the
example
you
can
provide
more
than
one
runtime
environment
if
you
want,
if
you're
interested
in
different
type
of
recommendation-
and
here
is
also
another
important
thing-
to
notice-
that
the
requirement
format
is
not,
let's
say
only
for
prepend,
but
thought
is
able
to
deal
with
all
the
dependency
management
tools
that
are
available
in
in
the
python
ecosystem.
B
So
we
don't
constraint
anything
in
terms
of
the
of
what
the
the
user
want
to
use,
but
we
are
able
to
provide
the
optimized
software
stack
or
the
recommended
software
stacks
in.
In
any
case-
and
this
is
another
feature
that
we
recently
introduced
and
the
use
of
the
overlays
for
those
of
you
that
are
familiar
with
the
argo
cd
or
the
use
of
overlays-
is
something
that
is
important
if
you
think
of
deploying
different
types
of
applications
on
different
environments.
B: The runtime environment that you selected would match the overlay that you would use. For example, if you think of Argo CD in terms of a deployment to your cluster, then this type of recommendation can also be provided for the deployment, and in this case the image that is created out of this runtime environment would be selected for this kind of environment.

B: All of this needs to be created in order to help not only the developers but also those who make decisions: if you think of investing in some new resources, you need to have a way to make decisions out of this data. That is the goal of having data-driven engineering. So, the first definition: if you want to provide a service, you will be familiar with this kind of definition.
B: Why this is also important is that, if anything in the system needs to be improved, these are indicators that immediately help you identify the issue in your system, and that basically triggers all the tasks and work that needs to be done, or the definition of, or request for, new resources in order to get to the objective that you want. This is just a general introduction to service levels in general, but what is important is always to have all these metrics.

B: You need to observe what is in your system, and in the case of Thoth we created several types of metrics that focus on the different layers: we get the ones from OpenShift in terms of the cluster, but also at the application level.
B
We
have
everything
that
is
used
in
thought,
so
all
the
services
needs
to
be
monitored
and
we
need
to
define
some
service
level
indicators
in
order
to
make
decision
on
what
needs
to
be
done
in
in
the
project,
and
this
is
just
a
list
of
the
of
the
technologies
that
we
use.
Of
course,
we
run
on
openshift
and
we
use
promito
centenos
to
scrape
all
our
metrics
for
the
different
components
and
we
see
every
matrix
in
grafana.
B
We
also
set
alarms
out
of
this
in
order
to
make
this
the
the
team
aware
of
anything
that
is
going
wrong
with
the
system.
We
also
use
some
tools
from
the
open
data
hub
and
you
and
superset
in
order
to
not
only
use
this
metrics,
but
out
of
this
metrics,
we
make
analysis
of
them.
B
We
combine
them
in
order
to
define
this
service
level
indicators
that
are
really
used
in
order
to
make
decision
on
what
needs
to
be
done
in
your
system
and
kafka
and
argo
are
two
of
the
technologies
that
are
also
using
project,
thought
and
kafka.
We
use
it
for
sen
or
an
event
that
is
happening
to
send
messages
across
the
components
and
argo
workflows
in
this
case
is
what
we
use
in
in
the
services.
B: This kind of component is able to react to all the events that come through it, not just user requests, but also anything that is happening on the learning side of Thoth, and, based on some conditions, it decides what needs to be done. In terms of the adviser, as you can imagine, what happens is that the adviser service, or the advise workflow in this case, an Argo workflow, would be scheduled, and inside the Argo workflow there are other tasks that

B: react to other events and make the system continuously work on any user request. Why is this important? As I said, you need to know the type of technology that you're using and define some specific metrics for each of them, because in that way you can define some indicators out of these metrics. If you think about Argo workflows, for example, some of the interesting indicators are the percentage of successful Argo workflows, or the latency of these services that are used.
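Indicators like these can be computed directly from per-workflow records. A minimal sketch, assuming each workflow is recorded with a status and a duration in seconds (the records below are invented):

```python
# Hypothetical workflow records: (status, duration_seconds).
workflows = [
    ("Succeeded", 120.0),
    ("Succeeded", 95.0),
    ("Failed", 300.0),
    ("Succeeded", 110.0),
]

def success_percentage(records):
    """SLI: share of workflows that completed successfully, in percent."""
    succeeded = sum(1 for status, _ in records if status == "Succeeded")
    return 100.0 * succeeded / len(records)

def average_latency(records):
    """SLI: mean workflow duration in seconds."""
    return sum(duration for _, duration in records) / len(records)

print(success_percentage(workflows))  # 75.0
print(average_latency(workflows))     # 156.25
```

In practice these numbers would come from Prometheus queries over the scraped workflow metrics rather than an in-memory list, but the indicator definitions are the same.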
B: Sorry, in terms of these Argo workflows that are run. Another example is the learning one. As I said before, we have the thoth-investigator, a central component that reacts to what is happening, or to the events that come from Kafka.

B: These are the observations that are required for Thoth to provide advice, and this is all automated inside Thoth. Therefore you need to have a way to monitor what is happening and to immediately identify if there is any kind of service that is reducing your objectives, and, in that case, decide what actions have to be taken: whether more resources are required, or, if some technology or some specific component is not performing well, we need to make a decision to put some resources, or some of the team, to work on that.
B: We have many types of integrations, and we want to know which of them is most used; in this way we can decide, or prioritize, which of the features we need to focus on first, and the time that is spent on each type of integration. You want to know how much time the service takes, from the initial request from the user until it goes back to the user, and whether we need to improve or change something in any of the components in order to make this indicator lower.

B: In this case we talk about these Argo workflows, and we need to know their quality and latency: the percentage of successful ones, the time that is spent on each of these workflows, and the acceptance of recommendations. If you think about some of the bots that keep dependencies up to date, then we also want an indicator in terms of acceptance rate.
B
So
if
we
provide
this
kind
of
advice,
then
are
they
accepted
and
if
they
are
accepted,
we
want
to
know
that,
because
in
this
way
we
can
also
prioritize
some
changes
or
something
that
would
improve
the
the
user
experience
and
why,
for
example,
the
acceptor
race
is
not
so
high
and
then
we
move
to
the
last
part.
So
you
had
an
idea,
what
is
project
taught
and
all
the
different
integrations,
and
then
we
move
to
the
service
and
metrics
level.
B
So
what
we
need
to
gather
in
terms
of
observation
to
create
these
indicators
that
help
basically,
the
team
taking
decision
and
setting
objectives
in
terms
of
what
the
system
needs
to
be
is
to
do
in
a
way
to
have
these
agreements
that
you
can
provide
to
the
users,
and
this
is
some
example.
If
you
go
to
the
low
to
the
left
side,
you
see
that
there
is
a
one
graffana
dashboard.
B: This Grafana dashboard is basically looking at the number of workflows that we are running in a certain namespace, and in that case there were too many workflows scheduled at the same time and the system was not able to deal with all of them, so we had to reduce them, because there was a problem with some of the components.

B: In that case we had to reduce the number of workflows in order to keep the system running: we didn't want to stop the system, but we could reduce a bit of what it was doing, using this data-driven engineering solution. What you see on the right side instead is this.
B: So, in order to utilize all the resources of that namespace to the maximum, we basically used this data to decide what needed to be done regarding the learning rate. This is one example. There are two types of learning in Thoth: we have the solver one, which is the basis for any type of recommendation, and then we have, for example, the security one, which also needs to be included in order to provide the security recommendations, and there is a minimum service level objective that we set with this type of chart.

B: Oh, I forgot: the unit of measurement in this case is minutes. So this shows how fast we are learning per minute; the scale on the y-axis is basically the number of packages that we learn per minute, and what we target, for example, is at least 10 packages on average over a certain period.
B
In
this
case,
we
look
at
a
weekly
basis,
but
we
can
see
if
the
system
is
able
to
not
only
the
application,
but
in
this
case
we
can
also
see
if
there
is
some
issue
on
the
on
any
of
the
components
or
on
the
network
or
whatever
is
happening
in
the
system.
But
these
are
important
indicators
that
you
we
help.
The
team
basically
immediately
identify
if
there
is
something
that
needs
to
be
changed
in
the
system.
B
This
is
something
that
we
recently
also
changed,
so
that
that
request.
I
show
you
before,
where
we
increased
the
number
of
resources,
because
we
we
could
imp
in
increase
them
basically
led
to
an
increase
on
the
number
of
packets
that
we
could
learn
per
day,
but
to
think
about
what
was
happening
before
this
period.
B
Now
we
are
basically
able
to
almost
double
what
is
happening.
What
is
learning
the
system-
and
this
is
another
example
of
how
we
use
all
this
data-
to
easily
identify
something
that
can
be
improved
in
the
system
or
that
needs
to
be
solved,
and
this
is
one
example
where
this
is
the
quality
of
the
workflows.
So
each
of
these.
B
This
is
what
they
say
in
terms
of
latency,
so
we
can
see
for
each
of
the
services
that
we
have
or
for
each
of
the
components.
Sorry
in
this
case
that
we
have
that
the
latency
that
is
taken
for
each
of
them,
so
if
on
the
x-axis
you
see
the
time
and
on
the
left
side
accesses
the
latency
that
is
divided
in
different
section
or
heat
maps
in
this
case,
and
each
of
them
show
the
percentage
of
workflows
that
fall
in
one
of
the
in
one
of
these
buckets.
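A heat map like the one described is built by assigning each workflow duration to a latency bucket and counting the share per bucket. A minimal sketch with invented durations and bucket edges:

```python
# Invented workflow durations (seconds) and bucket upper edges.
durations = [30, 45, 70, 130, 200, 40, 85, 400]
bucket_edges = [60, 120, 240, float("inf")]  # <=60s, <=120s, <=240s, >240s

def bucket_percentages(values, edges):
    """Percentage of values falling into each latency bucket."""
    counts = [0] * len(edges)
    for v in values:
        for i, edge in enumerate(edges):
            if v <= edge:
                counts[i] += 1
                break
    return [100.0 * c / len(values) for c in counts]

print(bucket_percentages(durations, bucket_edges))  # [37.5, 25.0, 25.0, 12.5]
```

Plotting one such column of percentages per time window gives exactly the heat map view: mass moving toward the low-latency buckets over time means the workflows are getting faster.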
B: So if you see that the chart is basically going up on the right, some of the workflows are taking less and less time, and this shows us that the system is getting better in terms of the speed of execution of these workflows; in this way we know whether the system is able to provide a certain service with a certain latency. In terms of integrations, as I said, we have all these different integrations that I explained before, and we also target, or monitor, their use.
B: With that, I think I can conclude. These are some of the ways that you can reach us. We have an open Google Chat channel that everyone can join. We have a Thoth Station YouTube channel where you can see everything that we do; all our scrums and demos are collected there so that everyone is able to see them. We have a website, thoth-station.ninja, where you can find a lot of documentation regarding Thoth and all the components, if you are interested in the recommender system.
A: Right, thank you so much, Francesco. That was a really great presentation, and we do have one question that I've seen: is this integrated with, or a part of, Red Hat Insights?

B: No, Project Thoth is a community project, open source, and is part of the AI Center of Excellence. What Thoth wants to do is to recommend the software stack, so there are all these different integrations, but there is nothing that we do with Insights at the moment, at least not inside Project Thoth.
A: Okay, thank you. I don't see any other questions. Chris, are there any other questions from, maybe, Twitch or YouTube?

A: I guess not, so we can just wrap this up; we don't have any other questions. Great. This was a really great presentation, thank you so much, and I just want to say thank you to Chris also for running this session and making the live stream seamless. I know we had a few hiccups here and there, but hey, that's technology, and it's all right. Thank you, everyone, for watching this session, and with that I guess we can close it up.