From YouTube: Magento MSI Open Demo. December 21, 2018
Description
Agenda:
- Demonstration of Elasticsearch Support on Custom Inventory Stocks (fixes from the last presentation) - @Slava Moskalyuk
- Improvement of Inventory Migration among Sources in Async mode - @Slava Moskalyuk
- Improvement of Inventory Migration among Sources. Memory leakage fix - @nuzil
- Fix of Performance degradation on Default Stock by @Stepan Furman
- Support Inventory Web API Test run on Travis build by @vnayda
- Dev Docs update by @Lori Krell
- Test Coverage update by @Tom Erskine
A
Looks like it's time to start. Today we have a bunch of updates for you, and to help with the rainy Christmas mood we even have a Santa at today's meeting. There are a bunch of presents which we want to give you today, and we will start with Elasticsearch support. We already started demonstrating Elasticsearch support in our previous demo, but there were some issues with it and the pull request was not merged until today. Actually, right before this meeting, we finalized the pull request.
A
We should probably mention that, as was requested in the previous demo, we made asynchronous mode the default for the inventory source migration. Before that it was synchronous, but taking into account that it is almost certainly going to be a pretty big operation, we don't want to introduce any…
D
So the initial issue was, as was already explained, that the error when we transferred these items was this one: the PHP fatal error "Allowed memory size exhausted". I could not even reproduce it locally, because I had no memory limit set and it just killed my Apache when I tried it, so there was definitely a problem somewhere there. Then I jumped in, and basically this pull request, which was already merged, is the work that was done in the last days.
D
Correct, so the problem was in a plugin that was on top of this operation. Looking at what it originally did, I found these two strange foreaches, which was a bit odd to me, because generally this method receives all the SKUs which we want to transfer when we make a mass action, and what it does is return back the same SKU list, but with the product types attached.
D
So you see that this SKU had a simple product type, or configurable, or whatever else. And then what is it doing here? It goes through each of those SKUs and then, inside, it goes through each of those SKUs again. So you can imagine: if you do it with 1,200 products, it's 1,200 × 1,200 requests. You get a huge number of requests which in the end try to read from the database, insert into the database, and drop data from the database, and all those operations just killed the migration with a big amount of data.
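The nested-loop pattern just described can be sketched as follows. This is an illustrative Python model, not the actual Magento PHP code, and the function names are hypothetical:

```python
# Illustrative sketch (not Magento's actual code): the plugin looped over
# the SKU list inside another loop over the same list, issuing a database
# call per iteration, so 1,200 products meant ~1,200 x 1,200 operations.

def transfer_slow(skus, fetch_type):
    """Anti-pattern: nested loops, one per-item DB call per (sku, sku) pair."""
    calls = 0
    for _sku in skus:
        for other in skus:       # second loop over the same list
            fetch_type(other)    # simulated per-item read
            calls += 1
    return calls

def transfer_fast(skus, fetch_types_bulk, insert_rows):
    """Fix: one bulk read plus one multi-row write for the whole batch."""
    types = fetch_types_bulk(skus)  # single query for all SKUs
    insert_rows(types)              # single multi-row insert
    return 2                        # two DB round trips in total

skus = [f"SKU-{i}" for i in range(1200)]
slow_calls = transfer_slow(skus, lambda s: None)
fast_calls = transfer_fast(skus, lambda s: {x: "simple" for x in s},
                           lambda rows: None)
# slow: 1,440,000 simulated per-item calls; fast: 2 round trips
```

Replacing the per-item calls with one bulk read and one multi-row write is what removes both the request explosion and the memory pressure.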
A
I will use the time while Alex is making all the preparations. So actually, the main issue with this logic was that it was logically incorrect in terms of the idea behind our service contracts. By the way, I can share my screen while Alex is getting set up.
A
I'll share my screen and just use this time to make a small announcement. Yesterday we published our service contracts API guidelines, and there we listed all of the recommendations on how the service layer API should look and how it is supposed to be organized: which data types should be used, and many other interesting things. It is not just another long document; all these guidelines actually absorb the experience and best practices we had in Magento, and especially in MSI, because, for example, many of the items…
A
Many bullet points in this list can currently be found only in Magento MSI, and I can vouch for this list, because we tried these particular practices on the MSI track and have seen that the code is much easier to handle and to work with when there are some particular recommendations regarding the services. What I wanted to say is that one of the recommendations regarding service contracts is that a service contract is supposed to work with a bunch of data.
A
So we don't want our services to work on the level of a single entity; we want all of them to work with an array of entities. Pretty similar stuff actually happens with this source migration: we have a bunch of entities which need to be migrated from one source to another, and the main point of degradation was not actually the migration of the entities themselves, not the migration of quantities, but the migration of the configuration data assigned to those quantities.
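The batch-oriented contract described here can be sketched roughly as follows. The class and field names mirror MSI concepts (such as the bulk source item save contract), but the snippet is an illustrative Python model, not Magento's actual PHP API:

```python
# Sketch of the guideline: a service contract accepts an array of entities
# and persists them in one bulk operation, instead of one call per entity.
from dataclasses import dataclass
from typing import List

@dataclass
class SourceItem:
    sku: str
    source_code: str
    quantity: float

class SourceItemsSave:
    """Batch-style contract: one execute() call, one bulk persist."""
    def __init__(self):
        self.persist_calls = 0
        self.rows: List[SourceItem] = []

    def execute(self, source_items: List[SourceItem]) -> None:
        # One bulk write for the whole array instead of n single writes.
        self.rows.extend(source_items)
        self.persist_calls += 1

service = SourceItemsSave()
items = [SourceItem(f"SKU-{i}", "default", 10.0) for i in range(1200)]
service.execute(items)   # 1,200 items persisted in a single call
```

The point of the guideline is exactly this: callers hand the service the whole batch, so the implementation is free to persist it in one statement.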
A
You know that we have the inventory on each particular source, which we call the source item quantity, and along with that we have the source item notification. The source item notification is the functionality which is supposed to notify merchants when some particular product is running out at one particular warehouse. This functionality specifies a threshold, and if we see this threshold reached…
A
The merchant is notified that a particular product is about to be sold out. Currently that functionality was implemented as a plugin on this migration process: the plugin accepted the array of source items and then processed them one by one. So we have a pretty fast operation for migrating a bunch of source items, a single query which migrates all 1,200 items from one table to another, but in the middle of this process…
A
We have the source item configuration migration, where we migrate record by record for each corresponding source item. That's why on this operation we had the degradation: not on the operation of quantity migration, but on the operation of quantity configuration migration. That was the real issue for us, and the issue is based on the fact that the pluginization was not done correctly; this plugin did not satisfy our service layer guidelines. So, Alex, you should be making the change now.
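The record-by-record plugin versus the batched fix can be modeled like this. The table and function names are hypothetical; this is a Python illustration of the pattern, not the actual Magento implementation:

```python
# The bulk quantity migration is one statement, but the notification-
# configuration plugin ran one statement per record; batching the plugin
# brings the whole migration back to a constant number of statements.

class CountingDb:
    """Stand-in for a DB connection that counts issued statements."""
    def __init__(self):
        self.statements = 0
    def bulk_copy(self, table, rows):
        self.statements += 1        # one multi-row statement
    def copy_row(self, table, row):
        self.statements += 1        # one statement per record

def migrate(items, db, config_plugin):
    db.bulk_copy("inventory_source_item", items)   # fast bulk quantity move
    config_plugin(items, db)                       # plugin runs afterwards

def plugin_slow(items, db):
    for item in items:                             # record-by-record
        db.copy_row("source_item_configuration", item)

def plugin_fast(items, db):
    db.bulk_copy("source_item_configuration", items)

items = list(range(1200))
slow_db, fast_db = CountingDb(), CountingDb()
migrate(items, slow_db, plugin_slow)   # 1 + 1200 = 1201 statements
migrate(items, fast_db, plugin_fast)   # 1 + 1 = 2 statements
```

With 1,200 items the one-by-one plugin dominates the whole migration, which matches the degradation described above.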
D
Yep, so basically that's the initial implementation, which I showed before: you have those foreaches inside the foreach, getting data from the database inside those two loops, and then also updating inside them. That's the already refreshed page, so if I do the transfer now it will ask me for the sources.
F
Actually, one question on it. Based on how you've done the implementation to solve the performance issue, I don't think this would make a difference, but in the mass transfer there's a checkbox for whether, after the transfer is completed, we want to unassign from the source. Do you expect much performance difference?
A
By the way, we have a dedicated label for the performance track, so all the issues which appear to be somehow related to performance are marked with this label, and you can see that most of these issues have already been fixed. That's why you see another label, "fixed in 2.3-develop branch", which means that the issue has already been fixed and merged into the 2.3-develop branch, our mainline.
A
But after that it appeared that we still had some problems on the medium profile, because in this case we were dealing with a big amount of data, so the changes introduced a week ago were not enough for us. That's why we continued our investigation; actually Stepan continued the investigation and found several other issues, and those issues are delivered in the scope of... let me show you the pull request.
A
1950. As you can see, this pull request is also merged into our 2.3-develop branch. So now I will show you fresh measurements. We still have our performance acceptance build as a red one, and we continue working on additional performance tests, but even now you can see that the performance degradation on the small profile barely exceeds 10% on most scenarios, for example search or the category page. And now, what is more important:
A
Before this demo we reran the performance acceptance build on the medium profile and got pretty similar data, which is pretty good, because a week ago we had practically the same data only for the small profile: the degradation was about five to ten percent, but on the medium profile the values were much worse, something like 200 to 300 percent. Now the values are the same. What does that mean for us?
A
It means that we fixed the issue on the database level, and now we have to find the root cause on the application level, because we introduced quite a lot of additional logic.
A
Business objects, additional validation like a chain of validations for "is in stock", "is in store", etc. So it is a good sign that the degradation values between the small profile and the medium profile are the same.
A
I'm not really good at reading Dutch names, but this ticket was created by a guy from the community who mentioned that we can introduce a caching layer for our SKU to product ID resolving mechanism. We have quite a lot of these SKU-to-ID calls, and apparently a caching layer on this side can also bring a performance improvement. So we will continue to investigate the performance degradation in general, but what is really good for now?
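The suggested caching layer amounts to memoizing the SKU-to-ID lookup. A minimal sketch, assuming a hypothetical resolver function rather than Magento's actual class:

```python
# Memoize SKU -> product ID resolution so repeated lookups within one
# request hit the cache instead of the database. The resolver is a stand-in.
from functools import lru_cache

db_hits = 0

def _resolve_from_db(sku: str) -> int:
    """Simulates a SELECT of the product ID by SKU."""
    global db_hits
    db_hits += 1
    return abs(hash(sku)) % 100_000

@lru_cache(maxsize=None)
def product_id_by_sku(sku: str) -> int:
    return _resolve_from_db(sku)

for _ in range(3):                     # the same SKUs resolved repeatedly
    for sku in ("SKU-1", "SKU-2"):
        product_id_by_sku(sku)
# only one DB read happens per distinct SKU; the rest come from the cache
```

If the same SKUs are resolved many times per request, this turns N database reads into one per distinct SKU, which is the improvement the ticket suggests.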
A
You know that we have the proposal document, and this proposal document can be found on our wiki page; there we describe how the store pickup should be implemented. We also had a grooming meeting, and the link to the grooming meeting can likewise be found on this page. For today's meeting we already have the first mock-up of the look and feel for this store pickup: we collaborated with our UI/UX specialist, and he prepared this mock-up.
A
So we introduced this switcher at the top of the shipping page, because one of the main concerns was to avoid the process of filling in the shipping address. If the choice for store pickup were at the bottom of the page, it would mean that all the data filled in above would be unnecessary. That's why we introduced this decision, shipping or store pickup, at the top of the page.
E
Hello everyone, nice to see you again. Today I present a small but important improvement related to the Web API functional tests. As we mentioned, we had to skip some builds related to the Web API functional tests on Travis, and that was absolutely not comfortable for contributors, because they didn't get early feedback on the status of their pull requests. We faced the same problem in the GraphQL track, and there it was an absolutely critical issue for us, because GraphQL is mostly about HTTP requests.
E
So we resolved this issue, and today I ported the fix for MSI from that side. In general there are not a lot of changes: some small changes in the Travis configuration and in some installation scripts. You can see I introduced one more job. Generally, we need to think about how to generalize all these processes, because we have a separate fix for GraphQL and a separate fix for MSI, and I will think about how to run all of the functional tests for pull requests.
E
So, as a result, you see that we have a green build with the Web API functional tests running, and after the merge of this pull request, contributors will be able to see how their Web API functional tests resolve. One more notice from my side: we probably need to change the structure of the Travis build, because you see that we are running it for each of PHP 7.1 and PHP
E
7.2. I'm not sure that we need to do this, because we have the same builds on our internal Jenkins, and what Travis really should do is provide early feedback as quickly as possible. Right now we need to wait twice the time; it would be much faster if we ran on one PHP version, not two. So generally that's all from me: an absolutely simple task, but I believe it will be helpful.
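A change along the lines being proposed might look roughly like this in `.travis.yml`. This is a hypothetical sketch under stated assumptions, not the actual MSI configuration; the suite and job names are made up for illustration:

```yaml
language: php
php:
  - 7.1
  - 7.2
env:
  - TEST_SUITE=unit          # fast suites still run on both PHP versions
matrix:
  include:
    # Run the slow Web API functional suite on a single PHP version only,
    # so contributors get feedback as quickly as possible; the internal
    # Jenkins builds cover the remaining version.
    - php: 7.1
      env: TEST_SUITE=api-functional
```

The trade-off is exactly the one described: halving the wall-clock wait for pull request feedback while relying on internal CI for full version coverage.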
A
Thank you. It was a small step for you in making this task, and a giant leap for MSI in helping our contributors get early feedback on their pull requests, especially pull requests related to the Web API functional tests. So right now I'm merging your pull request.
I
Hey everybody, okay, let me go ahead and share. A quick update: Kevin worked pretty hard on getting some reservations content added to the dev docs. I'll make sure this is also linked on the MSI wiki site. It has a really nice overview of reservations: how they work, how the calculations function, including an example or two. I'm working on some diagrams to add to this content so that it'll be much easier to review. We were sent some really great info from a contributor, and we're working to get some really nice diagrams added for other documentation as well.
I
For other updates, I will be updating the wiki guide with new content for the work that's in progress, such as the Elasticsearch support and the distance-based algorithm, and then in January, when the packaging occurs, the actual live content will include it as well. Just to remind everybody where you can find some of this info:
I
When you go to the MSI wiki, you can always find the wiki user guide here, which will always match the develop branch, and the live content is also linked from here, so it's all in one place. All the dev docs are down here, and I'll be adding the reservations link here. That's it, unless anyone has any questions or needs anything.
J
Okay, great. Just a brief update for today. In terms of testing coverage, with the end-of-year activities there has been a large increase. What we have in terms of our manual test coverage is this current breakdown by component. This is all the tests that are run manually; this number includes part of the automation, which is the full 539 number we see on the left, and that's an increase from last week. So there is ongoing test definition work, both for manual testing and as a basis for automation.
J
The automation for those new tests, particularly on credit memo, is in progress but not through a pull request yet. This one I'll just look at briefly: if you've been on previous calls, you'll see this is from December 5, so it hasn't changed, but it will be used as a baseline against our next manual regression run. So no changes on that side. For automated testing, we currently have 142 tests called out against the 395 manual ones.
J
By severity it's S1 and S0 primarily. There are S0s that still need to be automated, so we want to bring that S0 number up, and the next PR for credit memo includes S0s that will bring that slice of the pie up again. By component there's no change from last week either. Unfortunately, there are two PRs in right now that haven't gone through, so we weren't able to bump up the number for this report, though in early January we'll see that come up. And as an addition to reporting, we now report on execution again.
J
We have that 18-skipped number, which we should, maybe by the end of today, bring down by about six; there's a PR that's been approved and just needs to be merged through for that. From now on we will be starting to trend these numbers. We're obviously looking for an increase, but we want to track the rate of that increase. Currently, as you can see, just the last three weeks is all we're reporting on for now; obviously this will grow over time.
J
Over time we have our total test number going up, based on the non-automated tests going up and automated staying flat, but we should see an uptick on that early in the new year. So not a huge update for this week, but that's where we stand on our test coverage. If anyone has questions, I'll take them; otherwise, that's everything I've got.