From YouTube: 2018-01-18 Mesos Developer Sync
Description
Agenda and Notes:
https://docs.google.com/a/mesosphere.io/document/d/153CUCj5LOJCFAVpdDZC7COJDwKh9RDjxaTA0S7lzwDA/edit?usp=drive_web
A
Yeah, so the first thing on the agenda was the suggestion that this meeting... it's nearly in person anyway, but the suggestion was that we have a way for external people to join, and more formally invite them to Mesosphere every other meeting for, like, a two-hour sync, as was suggested. We would still broadcast it over Zoom, but it would be a chance for people to all sit in the same room and talk about various topics. What do you guys think?
B
Let's see, would it be every other meeting? Yeah. So I think we used to have... every Thursday used to be a time for people to come together? Eventually it sort of... I don't think it was this coordinated. I mean, we could do every other; we could also do every third, or every fourth. It's just a general idea. I think if it's too much, then people aren't going to do it; we need just the right amount, so that people are willing to actually travel someplace and sit down.
B
Other people could work on some docs, or write some code, or work on a design doc, whatever it is. It's not too much; it's just the right amount of time that people are willing to make the travel and go do it. And Twitter has offered to host, which I think is also great; for us it's not too far, and this just seems a good way to get together. Yeah, I think distributing the hosting to other companies is a good idea.
B
All right, so the second item: I thought it would be fun to just talk through the 1.5 features. You guys are probably on top of this, but it's always good to do it again. And then, G, yeah, you've got a blog post going; I thought you could just pull up the blog, share it, and walk through the features, and if someone is here they can talk a bit about each feature.
G
Can you guys see my screen? Yep? Okay, so this is the draft. I've kind of been nagging people to fill in the section for each of the new features or improvements we have in 1.5. This is the entire doc right now. I think I classified the new features into a bunch of categories, like cluster operations.
H
We allow you to increase resources or add new attributes without having to change agent flags, and the agent properly deals with that: it properly updates its own state, and the master properly updates its own state, so that everything works and you don't need to restart the agent anymore. It has been a requested ask from the community, especially being able to change resources, so we're halfway there by allowing the increase. In addition...
G
Right, sounds good, thanks. So yeah, that's under the cluster operations category. The other category of improvements we did in 1.5 is storage. I haven't had time to write this section yet, but I'm planning to, and I have a demo later in this meeting, so I can show you the feature, the CSI support. The other category is containerization. I'm not sure if Gilbert is on the line, but we did some changes to the Mesos containerizer so that we can dynamically garbage-collect unused Docker image layers.
I
Yeah, so this new API is sort of meant to bridge a gap between... well, let's see, I guess it's...
G
Yeah, I think I'm not super worried about the overcommit issue, because if you look at the existing systemd units that we run on every single machine, it's all overcommitted too, so I'm not super worried about that. One thing we can do is reserve a pool of resources that's dedicated for system daemons, and make sure that the standalone containers are using those resources, so it's not touching task resources or executor resources.
G
Okay, I think we can move forward. So that's the containerization improvement we've done in 1.5. There are also a lot of features that we added regarding resource management; we did a bunch of improvements for that too. One thing is the quota guarantee improvements that Matt and Ben Mahler have been working on. Do you guys want to briefly talk about the improvements you did?
M
Yeah, I mean, there's a bunch of specific tickets that I think are listed in the actual changelog, but in terms of the blog post I just highlighted where the user experience improved. So when you set guarantees, we do a better job of making sure that you actually get those guarantees, and we do a better...
G
IP addresses, things like that, in the future, or devices, GPUs, whatever. So that's something we're adding in 1.5. It's experimental: there's an API that we expose, similar to the framework API, so people can write their own resource providers, but we do provide some default resource providers for the storage work. Yeah. The other category is performance improvement; we did a lot of performance improvements in 1.5. Ben, do you want to talk about this?
E
Couldn't find my unmute button. Honestly, it's a little hard for me to say, just because we didn't have a 1.4 baseline; I don't think we even put together release notes for the stuff that went into 1.4 for Windows. But most of what we fixed in 1.5 is actual support for most things. Coming in from 1.4 to 1.5: we had made the agent work, like it could launch tasks, but you didn't have any sort of isolation, so sure, you said it's capped at one CPU...
E
This is sort of like what you have with cgroups on Linux; it's as similar an API as you can get on Windows, although the big difference is that it's an actual hard cap. But these now work: you can specify a limit on your CPU and your memory usage for a particular task, and that's actually there and enforced by the OS. So that's fixed, well, implemented, now. Also in 1.5 we have the fetcher working.
E
The fetcher was just not part of the build on Windows yet. So the Mesos fetcher, as of course you all know: give it a URI, it pulls it down. We've got support for zip files, and it will automatically extract those; it doesn't extract tarballs, but anything else it'll just download. We would like to add better support for extracting any arbitrary thing, but right now it just uses PowerShell. (On Linux we also just shell out to stuff.) I think in the future...
E
...we should use something like a 7-Zip library to just extract everything in-process, without having to shell out and depend on shell utilities. But yeah, that works, and it works over HTTPS, which was kind of annoying to solve, because we were using curl, and the older version used OpenSSL on Windows. While that works, you don't have certificate bundles on Windows with OpenSSL, so we upgraded curl, and now it does just work, because it uses the Windows SSL libraries.
E
Now, that was a big thing that got in. I don't think the Docker health checks got in. And now I'm going to just read this thing in front of me to remind me what else we did, thanks for having that. Oh yeah, long path support is here. I don't know if this was in 1.4 or not, but I think some of it was. For the longest time, when you ran the Mesos agent on Windows...
E
You had to do a whole bunch of upfront fixes on your machine to enable Windows long paths: before, you had to go change a registry setting or Group Policy stuff just to make it so that the agent would work. We fixed that. We basically take the root of any path that we're given and prepend the long-path prefix, which is an indication to the Windows APIs that says...
E
"Oh, we're actually going to support long paths here," so greater than 255 characters, and it just goes straight through the API to NTFS, which has always supported long paths; this is a Windows API problem. But all of that is fixed at the base layer inside of Mesos, so any time we make a call to a Windows filesystem API, it gets tossed through a little long-path abstraction and just works. So you don't have to set a registry setting, you don't have to set a Group Policy; the Mesos agent can just be installed and work on Windows.
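The long-path handling described above can be sketched roughly as follows. The helper name longpath is hypothetical (not the actual Mesos function), but the "\\?\" prefix is the standard Windows extended-length path prefix that the speaker is describing:

```python
# Sketch of the idea: before handing a path to a Windows filesystem API,
# prepend the extended-length prefix "\\?\" so NTFS accepts paths longer
# than the legacy limit. Idempotent: an already-prefixed path is unchanged.
LONG_PATH_PREFIX = "\\\\?\\"  # the literal four characters \\?\

def longpath(path: str) -> str:
    """Return `path` with the Windows long-path prefix prepended."""
    if path.startswith(LONG_PATH_PREFIX):
        return path  # already wrapped, nothing to do
    return LONG_PATH_PREFIX + path
```

With this wrapper applied at one choke point, every filesystem call above it gets long-path support for free, which matches the "fixed at the base layer" approach described.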
E
Now, so that's nice. I don't think we have installers yet, but for the most part it's coming along; I think we made some significant progress on it. The next step, which is not in 1.5 but is a feature coming out of 1.5 that we're working on, is getting agent recovery to actually work (I'm literally debugging that when I alt-tab away from this meeting), and health checks for Docker containers as well; those are actually up for review. So those are some upcoming features. So yeah, that's Windows.
G
Thanks, Andy, this is great. Yeah, so that's the Windows support. The last category is just an improvement on the replicated log itself. I'll give you some context on this improvement that we did. So previously... well, first of all, for those of you who are not familiar...
G
So this improvement actually allows a follower to catch up with the leader while it's not leading, so that it can basically catch up and fill the holes in its replicated log if it can. Then, once it becomes the leader, it takes much less time to actually become the leader, because it only has to figure out the remaining holes inside the replicated log.
G
So that's a huge improvement to the replicated log library, which is used by the Mesos master and some other applications. Yeah, I'm going to nag you to write some text for that improvement; I think the documentation has been published, but we haven't written the blog post section yet. Okay, that's pretty much it, the entire blog post so far. If you have any new features or improvements that you want to add to the blog post, let me know and I'll just add a section there.
N
Okay, so I want to talk about some of the performance benchmarking. The background is that previously there were users reporting performance issues with the v1 API, specifically that compared to v0 it was much slower. So we went ahead and benchmarked that ourselves, and picked some low-hanging fruit for performance improvements. What these results show is basically the performance difference between the different API versions and different Mesos versions. The tests were conducted on my MacBook Pro with...
N
...a 3.3 GHz Core i7 processor and 16 gigabytes of RAM, and we built Mesos with optimization turned on. So here are the results comparing the performance of the different APIs, specifically v0, v1 protobuf, and v1 JSON. What we did is set up a bunch of dummy agents and tasks with actual state, and we measured calls against the state endpoint to build up the response.
N
In 1.4 it's also the same: v1 protobuf is similar to v0, and v1 JSON is much slower. Jumping to 1.5, we did some performance tweaking and improvements, and now the v1 protobuf version is even faster than v0, about 30% faster, and v1 JSON has also recovered a lot, so it's only about 1.66x slower relative to v0. So yeah, those are some low-hanging fruits.
N
Specifically, we eliminated a lot of copying in the C++ code and used move semantics to construct the state; that's where the performance improvement comes from for v1 protobuf, and it helps v1 JSON as well. For JSON specifically, we used the jsonify function to avoid constructing an extra JSON object, and used that function to directly serialize C++ objects to the response.
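The jsonify-style change described here (serialize straight into the output buffer instead of first building an intermediate JSON object tree) is a general technique; Mesos does it in C++ with stout's jsonify, so the Python below is only an illustration of the idea with made-up data, not Mesos code:

```python
import io
import json

def serialize_via_intermediate(tasks):
    # Old approach: materialize a full intermediate object tree,
    # then serialize it in a second pass. The tree is a throwaway copy.
    obj = {"tasks": [{"name": name, "cpus": cpus} for name, cpus in tasks]}
    return json.dumps(obj, separators=(",", ":"))

def serialize_streaming(tasks):
    # jsonify-style approach: write each field straight into the output
    # buffer as we walk the source data, never building the tree at all.
    out = io.StringIO()
    out.write('{"tasks":[')
    for i, (name, cpus) in enumerate(tasks):
        if i:
            out.write(",")
        out.write('{"name":%s,"cpus":%s}' % (json.dumps(name), json.dumps(cpus)))
    out.write("]}")
    return out.getvalue()
```

Both functions produce identical output; the streaming version simply skips the intermediate allocation, which is where the savings come from on large state responses.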
N
So this chart is just a rehash of the previous data; it compares how the performance changes across different versions with the same API. For v0 it's mostly the same. With v1 protobuf, we made some improvement in 1.5, and for v1 JSON the improvement is even bigger.
B
Right, so my guess is, I think this is one of those things where we should already be telling people about it, including the numbers in this blog post, especially about JSON. That's actually valuable beyond just the Mesos community. For the JSON part by itself, I think making a blog post specifically about the JSON work would be worthwhile for a wider audience: okay, we did JSON in a different way, and we got these crazy cool performance results. It doesn't have to just be writing about Mesos or Mesosphere.
G
Yeah, there are some differences, especially in how we translate underscores. For example, if the field is snake_case, I think they translate that into camelCase for the JSON; things like that we don't do. So there's a convention that if the field name is snake_case, the JSON name will be camelCase.
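The naming convention being discussed (proto3's JSON mapping turns snake_case field names into camelCase) can be sketched as a small helper; this is an illustration of the convention, not code from Mesos:

```python
def snake_to_camel(name: str) -> str:
    """Convert a snake_case field name to camelCase, as the proto3
    JSON mapping does: 'task_id' becomes 'taskId'."""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)
```

Mesos's own JSON endpoints keep the snake_case names as-is, which is exactly the difference the speaker is pointing out.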
M
The first one was that in the past it used to be possible to amass reservations: when you left a reservation unallocated, we didn't realize that it was going to be allocated to you, and that we should therefore take it into account for your quota. So you could make reservations, leave them unallocated, keep declining them, and keep building up lots and lots of reservations. And then...
M
As a result, we sometimes set aside more headroom than we really needed. Like, if Greg has a thousand CPUs reserved even though he has 10 CPUs of quota, we shouldn't just subtract the thousand reserved CPUs from the overall quota pool; we should realize that, okay, it's only Greg's quota that's affected by this. That's been fixed. And the last one is when we're allocating to a role with quota: there are resources on the box that they have quota for, and resources on the box that they don't have quota for, for example, ports.
M
They can't even set quota for those, right? But there are also resources like, I don't know, GPUs, where maybe you didn't set quota, and there's a question of whether we should give you those resources or not. In the past we always gave you those resources; they were just included. But of course that might violate someone else's quota, so now, before we make that decision, we look at what headroom is needed and whether we're able to give this out without violating it.
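The headroom check just described can be modeled as follows. This is a deliberately tiny toy with made-up names; Mesos's actual allocator logic is more involved, but the core decision is the same: only hand out extra resources if enough unallocated capacity remains to satisfy everyone else's unmet guarantees.

```python
def can_offer(amount, total_unallocated, guarantees, allocations, role=None):
    """Return True if offering `amount` of a resource (e.g. to a role
    without quota for it) still leaves enough headroom to satisfy all
    other roles' unmet quota guarantees. Toy model, not Mesos code."""
    # Headroom needed: sum of each other role's unsatisfied guarantee.
    needed = sum(
        max(0.0, guarantees[r] - allocations.get(r, 0.0))
        for r in guarantees
        if r != role
    )
    return total_unallocated - amount >= needed
```

For example, if role "a" is guaranteed 10 CPUs and currently holds 4, then 6 CPUs of headroom must survive any offer made to other roles.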
G
All right, thanks. So I'll start with that. I don't have slides, but I have the documentation, so I can go through the documentation to give you an idea of what this is, in case you guys don't know about it, and then do a demo in 5-10 minutes. So this is the architecture. The reason we want to do this CSI support is that there are a couple of limitations in the storage support in Mesos, meaning, for local persistent volumes...
G
First of all, we don't offer logical block devices or physical block devices directly, and we don't have a way to allow frameworks to make a choice about which disk they want. I think some of the frameworks are quite opinionated about which disk they want to pick, and right now there's no way they can tell, like, which disk it is, or what the attributes for that disk are, things like that. They can just...
G
They can just pick the disk based on things like, for example, size, which is not very reliable. And it's a big burden for operators to configure each agent manually to add or remove disks, which makes it very inflexible. So that's the part of the local persistent volume story that we want to improve. We also have some limited support for external persistent volumes in Mesos today.
G
That goes through the Docker volume driver interface, via an isolator in the Mesos containerizer, but it has a lot of limitations; I think it's documented somewhere. I think the main reason it's not reliable is that the Docker volume driver interface itself is not very well designed: for example, some of the interfaces are not idempotent, so you may leak volumes...
G
...if you lose the response from the volume plugin, things like that. Also, and I think more importantly, it's an isolator, so it doesn't account for the resources used by those external disks. Basically, the external volume support in Mesos right now totally bypasses all the resource management, which is not something we want. We want Mesos to be aware of those resources and do fair sharing and quota allocation over them.
G
So those are the two motivations that drove us to do this work. And I want to briefly talk about CSI. CSI is an interface that is a collaboration between the major container orchestration systems, like Kubernetes, Docker, Mesos, and Cloud Foundry, and the goal is to create an interface so that storage vendors only need to write one single CSI plugin, and that plugin can work in any of those container orchestration systems.
G
So it's really nice for vendors, because they don't need to duplicate the effort. It's also good for us: we didn't have such an interface, we needed to create one, and it's better to create an interface that can be shared by most of the container orchestrators. So that's the context. Here is the architecture of how we implement this CSI support.
G
These are the acronyms for those resource providers. A resource provider connects to either the Mesos agent or the master directly, depending on whether it's a local resource provider or an external resource provider. A local resource provider always connects to the agent first, because the resources it provides are local to that agent, and there is a manager inside the agent to manage those local resource providers. For external resource providers, there's a similar manager in the master, and that manager will handle the external resource providers. For storage...
G
Basically, the storage local resource provider (SLRP) will use gRPC to talk to the CSI plugin to get the necessary information to send out resources. For example, one thing the SLRP needs to do is ask the plugin, "Tell me the capacity of this storage system," and once the plugin replies with the size, the storage local resource provider will construct the resource and tell the Mesos agent: here are the new resources that you want to send out to frameworks.
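The SLRP flow just described (ask the plugin for capacity, then surface a RAW disk resource to the agent) can be sketched with stand-in classes. The real implementation speaks CSI over gRPC with protobuf messages; everything below is an invented stand-in for illustration:

```python
class FakeCsiPlugin:
    """Stand-in for a CSI plugin; a real one answers GetCapacity over gRPC."""
    def get_capacity(self, profile):
        return 10 * 1024  # capacity in MB available for this profile

class FakeAgent:
    """Stand-in for the Mesos agent side of the resource provider API."""
    def __init__(self):
        self.resources = []

    def add_resources(self, resources):
        self.resources.extend(resources)

def slrp_poll(plugin, agent, profile):
    # Ask the plugin for capacity, then report a RAW disk resource
    # to the agent so it can be offered out to frameworks.
    capacity_mb = plugin.get_capacity(profile)
    resource = {
        "name": "disk",
        "type": "RAW",
        "profile": profile,
        "size_mb": capacity_mb,
    }
    agent.add_resources([resource])
    return resource
```

The key point of the design survives even in this toy: the agent never talks to the storage system directly; the SLRP mediates and translates plugin answers into Mesos resources.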
G
All right, so yeah. The main framework API change we did is to add these two new types to DiskInfo: one is called BLOCK and one is called RAW. BLOCK means it's a block device, something where you cannot assume there's a filesystem on it. RAW means it's either raw capacity or a raw volume, where you haven't decided whether you want to use it as a MOUNT volume, a BLOCK volume, or a PATH volume.
G
So these are the new types that we introduced in 1.5 that frameworks can use. Whenever you have a storage pool, for example an LVM volume group with 10 terabytes of disk space, Mesos will send out a RAW resource with ten terabytes, and the framework can do operations on that RAW resource to convert it into a BLOCK disk, or into a MOUNT or PATH disk, and then use that as part of a task. And there are some additional fields...
G
...we added to the disk source to map to the CSI volume ID and attributes; you're going to see that in the demo shortly. As I mentioned, a RAW resource can be either a storage pool or a pre-existing disk; for pre-existing disks, we use the ListVolumes call in the CSI interface to discover them. And we added some new operations to the framework API to allow frameworks to convert RAW resources into MOUNT, BLOCK, or PATH volumes. So those are the new offer operations that we introduced.
G
Okay, so I think that's pretty much it; I'll jump into the demo. Oh, one thing before I jump into the demo: the only remaining thing here is the concept of a disk profile. The reason we want to introduce the disk profile is that we don't want the framework to make scheduling decisions based on storage-vendor-specific parameters.
G
That doesn't make sense, because if the operator switches vendors, the framework has to be changed to deal with a new set of vendor-specific parameters, which is not very extensible. So the idea here is to provide an indirection. Basically, a profile is just a simple string; think of it as "fast", "slow", "gold". You can map that string to a set of vendor-specific parameters, and the framework can then make its decision based on the name of the profile rather than on the vendor-specific parameters.
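The profile indirection can be sketched as a simple lookup table. The profile names and the parameter fields below are invented for illustration; the real mapping lives behind the disk profile module interface described next:

```python
# Frameworks schedule against the profile *name* only; the mapping to
# vendor-specific details is resolved behind the module interface.
PROFILE_MATRIX = {
    "fast": {
        "volume_capability": {"access_mode": "SINGLE_NODE_WRITER",
                              "mount": {"fs_type": "xfs"}},
        "create_parameters": {"tier": "ssd"},   # vendor-specific knob
    },
    "slow": {
        "volume_capability": {"access_mode": "SINGLE_NODE_WRITER",
                              "mount": {"fs_type": "ext4"}},
        "create_parameters": {"tier": "hdd"},
    },
}

def resolve_profile(name):
    """Map a profile name to its vendor-specific parameters."""
    return PROFILE_MATRIX[name]
```

Swapping storage vendors then means editing this one table; frameworks that only ask for "fast" or "slow" never change.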
G
So that's how we can abstract away those vendor-specific, lower-level details of the disk and make scheduling decisions based on a high-level concept, which is called a profile here. And we introduced a module interface that allows you to customize the mapping between the name and those parameters, and we provide a default module that's very easy to use; I'm going to demo that too. All right, that's pretty much it, so let's just jump into the demo. Let me share my terminal.
G
All right, so can you guys see my screen? Yeah? All right. So this is on my virtual machine. What I'm going to do is, these are the commands, I'm going to start a master first. It's just a regular master, the latest 1.5 Mesos master. And then I'm going to start the agent. For the agent there are some new flags that you need to use...
G
I'm going to talk about that later, because we can add the resource provider dynamically through our agent operator API. And this is the profile plugin that we're going to enable, which we ship with Apache Mesos open source; it does the translation from a profile name to a bunch of vendor-specific parameters. Then these are the module directories, and you have to enable the agent feature called RESOURCE_PROVIDER to be able to use this. So let me run the agent right now.
G
Okay, so if you go to the Mesos master you can see it's running, the agent is running, two agents registered. Okay, now what I'm going to do is, these are some commands, this command is going to use the agent operator API to add a resource provider. It basically takes this JSON blob; let me open that JSON blob.
G
Yeah, so these are the JSON blobs for the agent operator API calls. Basically, we have a new operator API call, ADD_RESOURCE_PROVIDER_CONFIG, and then you specify the resource provider info: you specify the type and the name of the provider, and you can specify a dynamic reservation if you want the resources to have a reservation; it's okay to leave that empty.
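A sketch of building such a call body follows. The field names mirror the shape described in the meeting (an operator call type plus a resource provider info with type, name, and an optional reservation), but the exact schema and values here are illustrative and should be checked against the Mesos operator API docs:

```python
import json

def make_add_resource_provider_call(rp_type, rp_name, role=None):
    """Build the JSON body for an ADD_RESOURCE_PROVIDER_CONFIG call.
    `rp_type`, `rp_name`, and `role` are caller-supplied examples."""
    info = {"type": rp_type, "name": rp_name}
    if role is not None:
        # Optional dynamic reservation applied to the provider's resources.
        info["default_reservations"] = [{"type": "DYNAMIC", "role": role}]
    return json.dumps({
        "type": "ADD_RESOURCE_PROVIDER_CONFIG",
        "add_resource_provider_config": {"info": info},
    })
```

The resulting body would then be POSTed to the agent's operator API endpoint, which is what the demo's command does with the prepared JSON blob.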
G
And then you just hit the agent API to run it. So I'm going to just run that, and you can see that if it's successful it returns 200 OK. And if you go to the UI and go to the agent's state endpoint, you're going to see that there's some new resource being sent out. It's reserved to the "test" role, it's RAW, and it has a provider ID; it's coming from that resource provider.
G
It has a profile, an example profile, that's basically translated by the module that we ship with Mesos. Okay, so that's the very simple plugin. What we'll do next is add a different plugin: there's another CSI plugin for devices, which basically just lists all the raw block devices on that box. So if you do that...
G
...you get 200 OK, and if you go to the state endpoint and refresh, you see there's a different set of resources being sent out, which are the raw block devices. And this plugin will actually set some attributes for each raw block device, basically the output from lsblk, so you can see the type of disk, the size, whether it's rotational or SSD, what's the model, things like that.
G
Okay, so one last step. What I'm going to do is launch a framework that consumes those resources. I think this framework is getting offers from some agents already; it's an example framework written by Ben. What this framework is going to do is receive RAW resources, reply with a create-volume operation to create a MOUNT disk, and then leave that MOUNT disk available so that other frameworks can receive those resources.
G
So I'm going to go ahead and launch this framework using mesos-execute. All right, so this framework should be running; you can check that. Yeah, so this framework is running, and if we go to the state endpoint you can actually see that the disk resource we saw earlier, the 2-gig RAW resource, was actually converted into a MOUNT disk by that framework. These are the MOUNT disks that other frameworks can use: basically, another framework can just take that MOUNT disk and launch its Cassandra or Kafka tasks on it.
G
So that's a good question. We ship a plugin; let me show you. Basically, that plugin just takes a JSON file, called the profile matrix, which is a mapping from profile name to a bunch of volume capabilities and create parameters. In this case the create parameters are empty, because it's a test plugin, and then you can specify the access mode and mount options.
G
Those are CSI concepts; they map one-to-one to CSI volume capabilities. And people can just add new profiles to that JSON file. The module itself can pull the profile matrix from either a file or an HTTP server, so you can host it somewhere in the cluster, and the module will poll it periodically, every ten seconds for example; you can configure that, and then the profiles will be updated.
G
A huge thanks to everyone who has been helping with this project. It's a very big project, the biggest so far, at least in my experience, so yeah, huge thanks to those guys. And there's a lot of improvement to come: this is just an MVP. I think the eventual goal will be to get to external volumes and global resources; that's the eventual goal, and this work lays some groundwork to make that easier.
B
One of the things that we're thinking about is having an even lower-level, more focused... I don't know if you want to call it a track, or "Mesos plumbers," who knows. Right now this conference is getting pretty broad; there's a lot happening, and the feedback that I've gotten from the last conferences is...
B
What we tried to do at the last conference was an internals track, but I would say the lower-level track would be not just internals, but also things people have done on or around the project. So, like, Yelp or Twitter talking about some of the things that they've done on or around it would also be considered sort of more low-level. The high-level stuff would be things from, say, partners talking about some of the integrations they've done, or some of the projects that they run on top.
B
Perhaps the best way to describe it is: it's like how there's a Linux Plumbers Conf, which is more like what I think all you guys would go to, versus the LinuxCon kind of thing, and then there's the higher-level stuff. We're trying to get feedback on whether or not we think it makes sense to be more explicit about that separation. The answer could be that we don't. For now, if anyone wants to share, you can even email...
B
The answer could be "I think it's a good balance," or the answer could be "yeah, I thought the same thing, and I even heard feedback from certain people at the conferences," or the answer can be "actually, I think we should do even less internals-track stuff." I'd just love to hear that feedback, whichever way: email, snail mail, whatever. Sounds great.