From YouTube: Centaurus Monthly TSC Meeting 7/27/2021
A: So that we can later share this on the website. Today I think we have a pretty packed schedule. First, we are very happy to have Lalith Suresh from VMware come over to talk about his declarative cluster management work. The scheduler is actually the core of cluster management, and it's also a key component in Arktos. His work is very different, very innovative compared to the current way we do scheduling, so we're happy to have this part. Secondly, we will discuss whether we include the Polaris cloud into [unclear]; this is joint work between [unclear] and Futurewei. The last one: I will give some updates on the community outreach. So without further ado, let's go to the first one.
B: Yeah, just quickly before it starts, let me give a bit of background. Lalith is from the VMware research lab. What happened was, just to set the background, Lalith actually reached out to us, and our team had done a similar kind of work, you know, the scheduler work: a scheduler called Firmament, which was also a declarative way of doing scheduling.
B: So while we were talking about it, I brought up Centaurus, and from just a brief discussion I had with Lalith, this sounded really very interesting. Firmament was a declarative scheduler as well, but the programming model was very, very cumbersome, actually. So the more I looked at it, and I went through the paper as well, this looks pretty interesting, actually.
B: Basically, without writing any code you can do this. If you're familiar with Kubernetes, you have to write tons and tons of code, you know, the scheduling code. So that's what the background is. I'll let you go ahead. Yeah.
A: Oh, just one sentence: do we still have people on Firmament to maintain it?
B: No, and that's unfortunate, actually. We haven't had any activity in the repo, so we've been asked by the CNCF guys to retire the project, actually. So if you folks think, I mean, we don't have the developers anymore to work on that project, but if you think somebody from [unclear] or somebody else would be interested, it would be good to have that project active, actually.
D: ...building and using schedulers with this tool instead. All right, so the problem statement here is basically that no matter what you're running in your data centers, whether it's virtual machines or pods or containers or lambdas, what have you, you're going to solve the same kinds of problems over and over again. You'll basically have tons of these cluster managers involved, and their job is basically to take these workloads and assign them.
D: ...what you want your cluster management logic to look like, using a high-level declarative language like SQL. In some sense you say, I want my state to look a certain way, and how you get there is handled entirely by our framework. So I'm preaching to the choir here: you guys have tried to build a public cloud platform, you've looked at things like the vanilla Kubernetes scheduler, and you've seen just how hard it is to get right. So, in something like the Kubernetes scheduler...
D: The problem is you assign pods to nodes, and these pods come configured with all kinds of requirements: resource requirements, affinities, anti-affinities and whatnot. It's like 30 different hard and soft constraints that you have. Now, the scheduler's job is to basically assign pods to nodes such that all these hard and soft constraints are satisfied, and you find some kind of high-quality placements. And the algorithmic problem involved here is a combinatorial optimization problem; it's NP-hard in the general case.
D: There's no silver bullet here, so you can imagine that writing code to solve this problem is going to be quite hard as well. So if we peek under the covers to look at how systems like the Kubernetes scheduler are built today, and this is just one example out of many, it doesn't even matter that it's the Kubernetes scheduler, it's the same problem whether it's VMware's...
D: If you look at how these systems are built today, you'll find that it's basically a collection of these purpose-built, best-effort heuristics, like a web of them, and you basically have to get this combination of them to work a certain way to satisfy all these constraints, and that gets kind of hairy. So in Kubernetes in particular, I won't go into all the details, but you basically start off with initial placement.
D: The scheduler takes in one pod from a priority queue, and then it has to look at all the nodes, so the logic executes one node at a time. It's always comparing one pod and one node to see whether it can satisfy hard constraints, a so-called filter pass, and then it has to do a so-called scoring pass, which evaluates soft constraints and assigns a score to each node.
D: All these policies, which in Kubernetes they call predicates and priorities, I'll call hard and soft constraints. But beyond the simplest cases, the moment you have some kind of global reasoning, like inter-pod affinity and anti-affinity, you'll find that it's very hard to write code for this. You're basically having to write code that makes a local decision about one pod and one node, and you want those decisions to be consistent with some kind of global property you're trying to maintain. So what ends up happening?
D
Is
you
start
piling
on
all
these
ad
hoc
data
structures,
your
caches,
your
indexes
and
so
on,
to
efficiently
implement
this
logic?
And
the
moment
you
know
you
want
to
add
some
new
kind
of
policy.
This
starts
to
break
right
or
you
want
to
change
the
policy
like
if
you
want
to
make
interpod
affinity
work
at
the
level
of
the
the
number
of
pods
per
node,
suddenly
a
lot
of
the
existing
optimizations
break.
So
this
starts
to
get
very
fragile
and
the
worst
part
is
that
it's
not
even
the
full
story.
D: Now, the thing is, all four of these things I've described here are actually the same algorithmic problem. In one case you assign pods; in another case you remove pods and allow some new assignments; in some cases you're assigning groups of pods. They're all actually the same problem with slightly different constraints, and yet there are four completely different blobs of code to do this in the ecosystem, and they're kind of duplicating the same effort.
D: Wheel reinvention, I would say. So in the aggregate, what we observed is that this pattern of building these kinds of hard combinatorial optimization engines, to do scheduling, policy-based load balancing, etc., runs into a lot of problems. First of all, it's hard to make them scale and perform well, especially as the policies and constraints get more challenging. And then they usually sacrifice decision quality to scale: they use some approximation, they basically sample the set of nodes, things like that. So forget optimal solutions.
D: And lastly, the extensibility thing is hardly alien to you guys: it's hard to add new policies to these systems. And second, adding new features, like completely changing the scheduling game to, for example, make it multi-data-center aware or something like this, when you start to add new dimensions to the problem, suddenly a lot of this code has to be redone, which gets pretty nasty. So with no further ado, I'll describe how our solution works.
D: It's called DCM, Declarative Cluster Manager. To go back to our running example of the Kubernetes scheduler: to do the same thing with DCM, what you would do is first represent your cluster state in a relational database. This does not need to be something full-fledged like MySQL or Postgres; it can also be an in-memory embedded SQL database.
D: That is just a cache of some other data store. As long as you can query it relationally over something like JDBC, you're good; it doesn't matter where that database is. But you basically have a relational database with a schema representing your cluster state. So now you have tables of your pods, your nodes, and whatever other metadata you might have for your scheduler.
D: In our Kubernetes scheduler implementation, we have 28 different tables representing all the metadata that's relevant for scheduling in Kubernetes. Now, against this cluster state, what you do is describe all your policies and constraints as constraints in SQL. I'll show you what that looks like in a moment, but you're basically writing queries that specify some conditions that have to hold true for the state.
D
You
can
write
both
hard
and
soft
constraints
in
this
mode,
but
the
cool
thing
is
once
you
have
this
declarative
specification
of
what
your
status
in
the
schema
and
what
the
constraints
are
in
sql.
It
turns
out
that
there's
enough
information
for
our
engine
dcm
to
basically
do
everything
that
you
were
writing
these
ad
hoc
heuristics
for
before
right
and
at
one
at
run
time.
What
dc
will
do
is
it
will
pull
in
only
what
the
stated
needs
from
the
database?
D
It
will
efficiently
encode
it
due
to
an
optimization
problem
that
it
then
uses
in
off
the
shelf
state
the
state-of-the-art
constraint
solver
to
solve
it's
one
from
google.
They
use
it
in
production.
So
it's
good
enough
for
others.
I
would
imagine
right
and
then
you
basically
wire
the
results
back
to
the
calling
code,
and
you
can
use
this
mechanism
to
do
all
kinds
of
cluster
management
tasks
that
involve
this
optimization.
B: Quick question. You mentioned Google uses that constraint optimization algorithm. Do they use that as part of Borg, do you know?
D: I don't know if they use it in Borg, but it's certainly used for other kinds of problems at Google. It's called Google OR-Tools, and they have a dedicated team building it. The guys building that stuff are sort of legendary in the constraint optimization world, and they basically use that thing for everything from routing to all kinds of, you know, logistical problems, etc.
D
Good
clarification
thanks
for
asking
that
all
right,
so
now
I'll
again,
this
is
all
a
bit
up
in
the
air.
I'll
show
you
specifics
of
how
this
programming
model
looks
like
next
right,
but
before
that,
like
some,
some
more
you
know,
builds
just
to
show
that
this
is
something
we're
building
seriously
right.
We've
actually
tried
this
out
in
several
use
cases,
not
just
a
kubernetes
scheduler,
but
also
like
a
vm
load
balancing
tool
in
the
context
of
a
vmware
use
case,
as
well
as
a
distributed.
D
Transactional
data
store
the
code
food
database
and
across
these
cases
we
can
show
you
some
wins.
Basically
you'll
see
more
details
in
the
paper,
but
we
can.
We
are
actually
faster
than
the
baseline
kubernetes
scheduler,
at
least
at
500
node
scales,
where
we've
tested
it
and
it's
a
and
keep
in
mind
that
the
default
scheduler
in
kubernetes
is
looking
at
half
the
nodes
in
the
cluster
per
decision.
D
Dcm
is
looking
at
all
of
them
and
since
the
publication
of
the
paper
we've
gotten
like
roughly
two
times
faster
compared
to
where
dcm
is
and
over
here
we
are
already
like.
You
know
2x
faster
than
the
baseline
scheduler,
bringing
up
pods
end
to
end,
so
you
can
throw
in
a
database
and
a
constraint
solver
and
we're
still
faster
than
very
hand.
D
Optimized
go
code
that
the
kubernetes
scheduler
has
and
then,
of
course,
like
these
other
parts
of
our
design
translate
into
things
like
better
decision
quality,
which
means
you
know
by
using
a
constraint
solver,
we
get
better
load,
balancing
faster
preemption
things
like
that
right
and
for
extensibility,
like
adding
new
kinds
of
policies
to
our
the
systems.
We've
built
is
usually
a
matter
of
adding
less
than
20
lines
of
sql
and
even
like
very
non-trivial
feature
changes.
D
Making
a
kubernetes
scheduler,
for
example,
not
just
reason
about
pods,
but
also
vms,
that
are
running
in
the
same
like
vms,
with
crds,
basically
in
vmware's,
kubernetes,
distribution
and
reasoning
about
them
in
a
unified
way
is
just
you
know,
a
few
changes
to
the
schema
and
a
couple
of
constraints
that
should
apply
to
vms,
but
not
pods.
So
it's
quite
easy
to
do
it
once
everything
is
specified,
declarative.
D
If
I
understand
correctly
cube,
word
was
for
that's
a
data
plane,
consideration
right,
like
cubot
was
for
making
vms
look
like
pods
and
running
them.
That's
what
cube
word
is
right,
but
this
was
not.
A: ...done in the context of KubeVirt, yeah. In Kubernetes the VM is also modeled as a CRD, a new object, side by side.
D
Okay,
so
that's
the
pitch,
let's
get
into
specifics
right.
So
the
way
your
approach
you
know
using
dcm
in
any
use
case
is
to
first
represent
your
cluster
state
with
some
schema,
so
it'll
look
different
from
use
case
to
use
case,
but
for
kubernetes
it's
obviously
going
to
be
like
tables
of
codes.
You'll
have
tables
of
nodes,
pods
have
to
be
assigned
to
nodes,
so
no
there
will
be
a
node
column
in
the
pods
table.
D
There'll
be
a
foreign
key
relationship
just
like
the
basic
sql
stuff
that
you
have
to
do
right,
but
now,
because
you
want
dcm
to
figure
out
how
to
assign
these
balls
to
nodes
for
us
we're
going
to
tag
the
node
column
as
being
a
variable
column,
all
right.
So
what
this
tells
dcm
is
a
look
at
the
schema.
It
knows
immediately
that,
because
of
that
foreign
key
relationship
that
the
cells
here
can
only
draw
values
from
this
set
here
right,
it
knows
what
possible
assignments
are
now
to
make
this
primitive
useful.
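The setup described here can be sketched in SQL. This is a hypothetical two-table sketch, not the actual 28-table schema from DCM's Kubernetes scheduler; in the open-source DCM repo, variable columns are marked with a `controllable__` name prefix:

```sql
-- Hypothetical minimal schema for illustration (table and column
-- names are made up, not DCM's actual Kubernetes schema).
CREATE TABLE nodes (
    name VARCHAR(100) PRIMARY KEY,
    memory_capacity BIGINT NOT NULL,
    memory_pressure BOOLEAN NOT NULL
);

CREATE TABLE pods (
    name VARCHAR(100) PRIMARY KEY,
    memory_request BIGINT NOT NULL,
    -- The "variable column": DCM decides its value at solve time.
    -- The foreign key tells DCM the domain each cell may draw from.
    controllable__node VARCHAR(100),
    FOREIGN KEY (controllable__node) REFERENCES nodes(name)
);
```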
D
You,
let's
take
the
simplest
possible
constraint
here:
right,
don't
assign
pods
to
nodes
that
are
under
that
report
to
be
under
high
memory
overload
right.
So
the
way
you
write
constraints
in
dcm,
this
syntax
has
changed
a
little
bit
like
it's
now
called
create
constraint.
You'll
find
that
when
you
look
at
the
github
repo,
but
you
basically
specify
hard
constraints
as
some
selection
of
rows,
and
then
you
specify
some
predicate
that
should
hold
true
for
each
of
those
rows
right,
pretty
straightforward,
so
you
can
basically
write
in
this
example.
D
Select
star
from
pawns
all
your
pods
and
you
check.
This
is
the
hard
constraint
that
pods
dot
node
is
always
in
some
set
of
nodes
right
here
I
can
do
some
kind
of
filtering
if
you
want
now,
of
course,
in
practice
this
will
get
more
complicated,
but
you
have
the
full
arsenal
of
sql's
expressiveness
available
to
you.
You
can
do
joints,
you
can
do
aggregates.
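A hard constraint of the shape just described might look roughly like the following sketch, in the view-based syntax from the DCM paper (table and column names here are illustrative, and newer releases of the repo spell this with CREATE CONSTRAINT instead):

```sql
-- Hard constraint sketch: every pod's node assignment must come from
-- the set of nodes that do not report memory pressure.
CREATE VIEW constraint_avoid_memory_pressure AS
SELECT * FROM pods
CHECK controllable__node IN
    (SELECT name FROM nodes WHERE memory_pressure = FALSE);
```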
D
You
can
do
group
bys,
you
can
do
correlated
sub
queries,
you
can
do
arrays,
there's
a
lot
of,
and
we
also
give
you
a
lot
of
custom
aggregate
functions
to
express
your
constraints.
Soft
constraints
are
the
same.
You
basically
write
a
view
where
now
you
give
some
expressions
that
you
have
to
maximize.
So
it's
similar
to
hard
constraints,
but
instead
of
check,
you
basically
write
maximize
and
you
give
an
expression.
So
you
can
do
this.
D
You
can
use
this
to
do
things
like
load
balancing,
for
example,
you
can
compute
some
join
that
calculates
the
spare
memory
capacity
per
node.
Now
this
itself
is
a
variable
because
the
value
of
that
will
depend
on
the
assignment
of
positive
nodes,
but
you
don't
need
to
take
care
of
that.
You
declare
it
and
dcm
will
deal
with
all
the
nitty-gritties
for
you,
but
then
you
can
say
you
know
I
want
to
maximize
the
minimum
spare
capacity
per
node
and
that
will
cause
dcm
to
find
allocations
that
spread
out
your
bonds
right.
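A sketch of the load-balancing objective described here, again with illustrative names; the exact spelling of the maximize clause may differ from the current DCM release:

```sql
-- Intermediate view: spare memory per node. Its value depends on the
-- yet-to-be-decided pod-to-node assignment, so it is itself a
-- variable expression that DCM tracks for us.
CREATE VIEW spare_memory_per_node AS
SELECT nodes.name AS node,
       nodes.memory_capacity - SUM(pods.memory_request) AS spare
FROM nodes
JOIN pods ON pods.controllable__node = nodes.name
GROUP BY nodes.name, nodes.memory_capacity;

-- Soft constraint: maximize the minimum spare capacity across nodes,
-- which pushes the solver toward spreading pods out.
CREATE VIEW objective_spread_pods AS
SELECT MIN(spare) FROM spare_memory_per_node
MAXIMIZE;
```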
D
The
way
it
looked
like
when
you're
actually
writing
java
code
is
that
you
instantiate
these
little
things
called
models
and
a
model
is
parameterized
by
a
connection
to
a
database
from
which
dcm
will
fetch
the
schema
and
the
list
of
constraints
which
are
strings
in
sql
one
per
constraint
right
and
from
this.
Basically
dcm
will,
you
know,
instantiate
a
model,
it
will
do
some
code
generation
compiling
under
the
covers
and
we'll
give
you
back
this
model
and
you
can
think
of
this
model
as
encapsulating.
D
Basically
one
optimization
problem,
you're
trying
to
solve,
say,
initial
placement
or
preemption
or
bad
scheduling,
whatever
right
and
then
at
runtime.
Every
time
you
call
model.solve
what
happens
is
dcm
will
pull
in
all
the
state?
It
needs
from
these
tables,
and
it
will
do
so
so
yeah
it'll,
pull
in
all
the
tables
and
then
it'll.
You
know,
use
use
a
constraint,
solver
and
it'll
output,
basically
the
same
tables
but
with
values
assigned
to
the
variable
columns.
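To make the workflow concrete, here is a small runnable toy that mimics the shape of what was just described: cluster state in a relational database, a solve step that reads the tables and fills in the variable column, and the same tables coming back with assignments. It is not DCM, and all names are made up; a trivial greedy best-fit stands in for the real constraint solver.

```python
import sqlite3

# Toy illustration of the workflow described in the talk, NOT the DCM
# library: cluster state lives in a relational database, "solve" reads
# the tables, decides values for the variable column, and the tables
# come back with assignments filled in.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nodes (name TEXT PRIMARY KEY, mem_capacity INT);
    CREATE TABLE pods  (name TEXT PRIMARY KEY, mem_request INT,
                        node TEXT REFERENCES nodes(name));
    INSERT INTO nodes VALUES ('n1', 100), ('n2', 100);
    INSERT INTO pods  VALUES ('p1', 60, NULL), ('p2', 50, NULL);
""")

def solve(conn):
    """Assign each pending pod to the node with the most spare memory."""
    pending = conn.execute(
        "SELECT name, mem_request FROM pods WHERE node IS NULL").fetchall()
    for pod, request in pending:
        # Spare capacity per node: the join the API does not do for you.
        best = conn.execute("""
            SELECT n.name,
                   n.mem_capacity - COALESCE(SUM(p.mem_request), 0) AS spare
            FROM nodes n LEFT JOIN pods p ON p.node = n.name
            GROUP BY n.name
            HAVING spare >= ?
            ORDER BY spare DESC LIMIT 1""", (request,)).fetchone()
        if best is None:
            raise RuntimeError(f"unsatisfiable: no node fits pod {pod}")
        conn.execute("UPDATE pods SET node = ? WHERE name = ?",
                     (best[0], pod))

solve(conn)
print(dict(conn.execute("SELECT name, node FROM pods")))
```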
D
Okay.
So
now
this
is
how
you
basically
get
your
scheduling
decisions
and
if
you
don't
find
a
solution,
basically
dcm
will
give
you
an
exception.
It
will
tell
you
it
was
unsatisfiable
and
it
will
tell
you
why.
It'll
tell
you
we'll
give
you
a
list
of
constraints
that
were
unsatisfied
and
we
are
also
adding
support
to
not
only
give
you
the
set
of
constraints
that
were
unsatisfied,
but
the
set
of
rows
that
led
the
set
of
rows
from
the
tables
that
led
to
the
unset
right.
H: Yeah, right, okay. Sorry, just a quick clarification: so with DCM at runtime, for every pod to be scheduled, do you not need to access the database? Or would everything be determined inside the DCM program?
D
It
basically
so
the
working
model
is
like
it
pulls
in
what
it
needs
over
jdbc
right,
and
so
you
do
this
for
a
batch
of
pods
at
a
time.
That's
usually
the
expected
model
right,
so
it
runs.
So
basically
all
your
constraints
are
accessing
tables
right.
So
dcm
knows
the
set
of
tables
and
views
you
need
to
access
from
the
database
and
once
per
like
every
time,
you
call
model
up
solve
it'll
fetch
it'll,
do
a
series
of
sql
queries
to
get
everything
it
needs
and
then
it
outputs
a
decision.
H
Exactly
so
so
that
means
all
the
calculation
to
make
the
assignment
is
down
at
the
db
engine.
H: Now, the first question I have is basically, I want to get a very clear answer: for all the scheduling decisions, where the assignment was done, was all the calculation done at the database engine, or did the DCM binary have logic inside it to decide?
D
The
dc
solver
will
make
the
final
call
on
the
assignments.
That's
where
the
constraint
solving
is
happening,
but
you
can
get
a
lot.
You
can
do
a
lot
of
the
pre-computation
to
do
that
in
the
database.
For
example,
if
you
want
to
compute
affinities
and
anti-affinities
right
like
if
you
want
to
have
that
kind
of
policy,
you
can
basically
compute
from
the
database
a
table
that
says
this
part
should
be
repelled
by
these
nodes.
H
Yes,
so
the
following
question
for
this
first
one
thanks
for
clarifying
that
would
be
like
right
now,
at
least
in
kubernetes.
The
all
the
data
was
re
was
stored
in
the
hcd
right
for
me.
Yeah
and
the
node
can
come
and
go
and
go
like
node
can
fail.
New
node
can
be
added
so.
D: It's the same as anything you do in Kubernetes. I'll show you; give me two slides. Yeah.
B: One question I have: I want to go back to the Firmament solver. We had a solver there as well; it was a flow solver, you know, and it was pretty hard, basically. In the sense that you have nodes and you have arcs, and you go from left to right, okay? So you have to pretty much define all the nodes and arcs upfront, the whole structure, XOR, OR and all that kind of thing. It's a pretty complex program, yeah.
B
So
in
your
case
you're
saying
their
complex,
optimization
engine
algorithm
is
not
sql
based.
How
much
of
an
effort
was
that?
Because
you
said
you
bridged
that
gap
between
sql
and
one
side
and
the
optimizer
and
solver.
D: The thing is, compared to Firmament: what that is, it's actually not a solver in the sense that the constraint satisfaction community would call something a solver. That's really just an algorithm, and you're kind of coercing everything to work on that algorithm. Whereas the solver here is a true constraint solver, very fancy technology; it's really a very...
D
Exactly
yeah,
it's
it's
very
hostile
to
expressing
anything
complex
because
you
have
to
worry
about
the
graph
here.
You
kind
of
really
declare
what
you
want
to
the
solver,
but
even
in
that
declaration
there's
a
there's
a
there's.
Some
there
are
a
lot
of
subtleties
involved
which
we
take
care
of.
D: Indeed, right. So the first thing we were using was this thing called MiniZinc, which made it easy for us to get the project started, but it's extremely slow. It's a high-level modeling language which then compiles down to something that talks to different solvers, and that thing was just extreme: I couldn't even get it to work for a 30 or 40 node cluster. It was really that bad.
D: So I just want to be cognizant of time; I can go deeper, yeah.
A
You
probably
have
like
45
to
50
minutes
yeah
because
is
very
important.
C: Yeah, do you think we could, you know, have, I can talk before the last one, so you guys can take as long as you want. How is that?
D: Yeah, so back to the programming model. I mentioned you instantiate these little models, and the idea is that you talk to the same database and the same schema, but you instantiate different models that cover different kinds of cluster management tasks, which can operate at different time scales.
D
So,
for
example,
you
can
have
one
for
initial
placement,
which
really
only
looks
at
the
set
of
pending
pods,
and
it
does
a
lot
of
pre-computing
in
the
database
to
compute
how
those
pods
should
relate
to
other
things
in
the
cluster
right.
But
you
can
only
you
know
you
can
give
a
fairly
small
problem
to
the
solver
to
deal
with
and
if
that
fails,
you
can
basically
fall
back
to
a
preemption
model
that
basically
brings
in
even
more
pods
to
look
at
in
the
system
and
in
the
pms
model.
D
You
can
have
a
few
additional
constraints.
That
say
you
know
what
what
pods
are
allowed
to
be.
You
know
kicked
out
what
are
not
allowed
to
be
kicked
out,
and
similarly
descheduling
is
almost
the
same
as
preemption,
but
if
you're
not
assigning
anything
new
you're,
only
removing
things
from
the
cluster
to
improve
some
kind
of
utility
right
and
now.
One
thing
I
want
to
mention,
like
I
said
before-
is
that
the
part
that
dcm
will
take
care
of
you
and
I'm
happy
to
go
into
any
level
of
detail.
D: ...you want on this. But basically, we use a thing called a CP solver; that's what this Google OR-Tools is. And the performance of a solver is quite sensitive to how we present problems to it, and this is entirely the smarts that have to go into the compiler to get this working. But basically it involves making sure that, when we are translating from SQL to the constraint solver, we basically remove as much as we can...
D: And the second is this thing called global constraints, which is something peculiar to CP solvers, where there are some patterns of constraints for which the solvers implement specialized algorithms. One classical example is the so-called "all different": you have 10 variables and you want all of them to be a different value.
D: That happens at both levels. So, like I said, I'll show you in a moment how this will look, but you can do a lot of that precomputing in the database. For example, computing the spare capacity per node: Kubernetes doesn't tell you that out of the API, like, for example, how many...
D
So
when
you
assign
positive
nodes,
the
pods
might
have
you
know,
cpu
requests
limits,
and
these
things
right.
How
much
spare
capacity
you
have
on
the
nodes
is
not
something
kubernetes
will
tell
you.
You
have
to
do
the
join
yourself.
D
The
kubernetes
scheduler
does
this
today
as
well,
so
that
kind
of
joins
and
all
those
expensive
aggregations.
You
can
do
a
lot
of
that
in
the
database,
but
even
the
way
the
constraints
are
specified,
for
example,
for
gang
scheduling.
You
would
need
to
say
you
need
a
group
by
say
groups
of
pods
and
say
either
all
of
them
are
assigned,
or
none
of
them
are
assigned
that
group
by.
We
have
to
still
do
at
our
end
right
because
it's
needed
for
the
constraint
specification
yeah.
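The gang-scheduling shape described here might be sketched as follows; this is illustrative pseudocode in DCM's SQL-like constraint style, with a hypothetical pod_group column, and the exact all-or-nothing predicate DCM uses may differ:

```sql
-- Gang scheduling sketch: group pods by their (hypothetical)
-- pod_group column and require that, per group, either every pod
-- receives a node or none of them does.
CREATE CONSTRAINT gang_all_or_nothing AS
SELECT * FROM pods
GROUP BY pod_group
CHECK COUNT(controllable__node) = COUNT(*)
   OR COUNT(controllable__node) = 0;
```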
D: So now, the question that we got earlier: what does it look like when you're actually building something with this? The use case for the Kubernetes scheduler will look like this. You still have to do what the baseline scheduler does in terms of subscribing to events from the Kubernetes API, so you have informers subscribing to notifications about your pods, your nodes, and other data types that are relevant to scheduling, like, for example, the disruption budgets, all of those things.
D
The
differences
between
doing
it
without
dcm,
vanilla,
scheduler
and
with
dcm,
are
like
this.
The
vanilla
scheduler
will
have
a
lot
of
state
described
in
custom
data
structures
in
dcm.
A
lot
of
this
we
basically
manage
in
the
scheduler
itself,
so
we
don't
use
a
database
sitting
away
from
the
scheduler.
We
use
an
embedded
in-memory
sql
database
called
h2.
We
are
now
replacing
this
with
a
different
thing
to
do
incremental
computation
but
I'll.
We
can
talk
about
that
afterwards.
So.
B
All
the
nodes
that
the
cluster
state
you're
gonna
replicate
so
currently
the
the
the
question
which
union
had
that
state
is
in
that
cd
itself.
You
see
so
instead
of
that.
D: But right now, in almost all of our use cases, people don't use an SQL database for that state, so we just use an in-memory cache of that state, using something like H2.
H
Yeah
and
just
a
quick
question,
another
question
the
quick
comment
on
here
as
well.
The
reason
I'm
trying
to
clarify
this
synchronization
with
the
ecd
with
the
design
database
is
everything
even
is
in
memory,
ddb
everything
you
need
to
consistently
write
to
it.
Some
locking
mechanism
has
to
be
applied.
That
could
therefore
affect
your
performance
when
you're
reading
it
from
dcm
to
get
a
resolver
down.
D: This is a good question. So, right now the database has hardly been a bottleneck. Sorry, I'm going to walk while I let my cat out; she's screaming. But yeah, so right now the database end has not been a problem for us, and we expect to run into these issues later. Right now we're changing this into an incremental computation engine called DDlog, where we don't anticipate that these problems will be an issue, but we'll have to get there to find out.
H: That's probably the case. Like, in the database, the larger the table you have, and the longer it's running, the more problematic it gets. You probably have indexing, all kinds of stuff.
D
Stuff
yeah
exactly
so,
we
need
to
so
the
thing
with
the
like.
The
reason
I
mentioned
dd
log,
this
incremental
engine
is
that
you
basically
introduce
like
it
computes
on
deltas,
so
normal
relational
database
gives
you
like
tables
and
you
create
queries
and
you
get
output
tables
in
ddlog,
you
get
deltas,
so
you
you
insert
changes
to
the
database
and
it
will
compute
the
changes
made
to
the
output
tables
right.
So
it's
the
amount
of
work
the
database
is
doing.
Per
update
is
proportional
to
the
size
of
the
update,
which
is
pretty
small.
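The delta idea can be illustrated with a toy incremental view (an illustration of the concept only, not DDlog itself): maintain the equivalent of SELECT node, COUNT(*) FROM pods GROUP BY node from a stream of changes, so that the work per update is proportional to the delta rather than to the table.

```python
from collections import Counter

class IncrementalPodCount:
    """Toy incremental view: pods-per-node counts maintained from
    (node, +1/-1) deltas instead of re-scanning a pods table."""
    def __init__(self):
        self.counts = Counter()

    def apply(self, node, delta):
        # Work done here is proportional to the size of the delta,
        # not to the number of pods or nodes in the cluster.
        self.counts[node] += delta
        if self.counts[node] == 0:
            del self.counts[node]

view = IncrementalPodCount()
view.apply("n1", +1)   # a pod lands on n1
view.apply("n1", +1)   # another pod lands on n1
view.apply("n2", +1)   # a pod lands on n2
view.apply("n1", -1)   # a pod on n1 is deleted
print(dict(view.counts))   # {'n1': 1, 'n2': 1}
```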
J: Well, I mean, this is the area where we can take another half a day of discussion. We're constantly looking at optimizing SQL in many different ways, but again, our use case, I mean our typical use case, is somewhat different.
J: If you don't have any hiccups at 10,000 nodes, then whatever database, whatever SQL engine you use is, I would say, good enough for a majority of use cases, maybe outside of some Google-like scales. So I do have a sense that, as you mentioned, SQL is not a bottleneck here.
D
Yeah,
it's
more
of
like
not
having
native
incremental,
like
sort
of
incremental
computation,
that's
issue
for
us.
There
are
actually.
J: We've done very similar work, not replacing, but, well, basically replacing etcd with Ignite, and we've actually done these deltas already, so you might as well look at it. But again, my point is that it's just an interesting kind of inefficiency if you don't have the performance problem at that scale, and I think it's something that is...
D
Exactly
yeah
later
yeah
yeah,
the
the
getting
deltas
is
actually
so.
This
is
partly
what
I'm
looking
at
with
atinagaras
this
summer
and
going
forward
is
like,
if
you
know
the
exact
deltas,
you
can
actually
do
certain
kinds
of
scheduling
decisions
even
more
efficiently,
like
global
decision
making.
You
can
do
that.
The
deltas
will
tell
you
what
subset
of
the
state
to
pull
in
from
the
database,
and
you
can
actually
do
fairly.
B
This
is
very
important,
though,
because
you
know
when
you're
dealing
with
at
the
scale
at
that
scale,
sdn
controller
we're
using
actually
ignite
to
build
our
next
generation,
sdn
controller,
so
same
issue.
You
know,
sdn
controller,
you
need
to
know
each
and
every
networking
node
you
know
going
up
and
down
and
all
that,
so
the
incremental
update
becomes
a
real.
You
know
important
and
kind
of
a
bottleneck
as
well
so
yeah.
D: What we had to do with DDlog, because I really want to stick to SQL so I don't have to rewrite DCM, is that a colleague and I added an SQL frontend to talk to DDlog. And it's funny you mention SDN controllers, because the first use case my colleagues applied DDlog to was network controllers, yeah.
D: ...from the industry, okay. Okay, so just to wrap up, this is my last slide for now, like I said. This is the state management distinction: custom data structures versus a relational view of the same state. And then, instead of reasoning about one pod and one node at a time as in the vanilla scheduler, you basically end up with multi-pod, multi-node reasoning at a time; the query doesn't really care...
D
How
many
the
query
you
write
the
queries
in
a
way
that
you
don't
care
how
many
things
are
in
each
of
them
pretty
much,
and
instead
of
writing
all
your
policies
as
these
very
fragile
filter
score
heuristics,
you
basically
write
them
as
constraints
in
a
fully
declarative
way.
Right.
You
write,
you
know
hard
and
soft
constraints.
D
You
know
you
can
turn
a
hard
constraint
on
a
soft
constraint
by
just
changing
one
clause
with
some.
You
know
restrictions
pretty
much
but
yeah.
You
basically
tell
you,
you
basically
specify
what
you
want.
The
cluster
management
logic
to
look
like
and
we'll
take
care
of.
You
know
all
the
nitty
gritties
of
dealing
with
the
constraints
over
for
you,
so
I'll
stop
here.
This
is
the
overview.
D
The
tool
is
open
source.
It's
actively
maintained,
okay,
these
things,
I
think
we
can.
This
might
be
interesting.
There's
one
part
I'll
also
mention
so
here's
one
split
that
we
discussed
right
like
how
do
you
split
functionality
between
the
database
and
the
solver?
So
the
idea
is
to
basically
use
the
database
for
its
strength
right.
D
If
you
already
know
that
you
prefer
some
kinds
of
nodes
over
others
sure
you
know
have
a
table
that
says
you
know
like
some
kind
of
priority
ranking
or
some
kind
of
score
like
do
all
of
that
early
on.
If
you
want
to,
there
are
one
very
good
thing
you
can
do
with
the
databases
to
compute
all
kinds
of
expensive
aggregates.
Like
I
mentioned,
the
spare
capacity
per
node
is
one
example,
but
also
like
you
know,
mappings
of
pods
to
nodes
or
pods
to
pods
that
are
mutually
affine
or
ntf.
D
You
can
compute
all
of
that
in
the
database
quite
efficiently
and
then
on
the
solver
side.
You
basically
just
evaluate
your
constraints
right.
This
itself
also
has
its
own
query
evaluation
to
do,
but
we
basically
do
that
on
our
end,
but
that's
pretty
much
it
yeah
I'll.
Take
any
questions.
If
you
have
it,
there's.
L
Any questions from anybody? Yeah, thanks. Thanks, I think it's a really good presentation and a really good effort, what you've done there; thanks a lot. And one question: what's the bootstrapping time and effort? Because if I understood it correctly, you keep everything in memory. So if, let's say, we want to change the scheduler, like one scheduler fails and it needs to go on the other node, I would need to kind of restart everything, right? Because I lose all the in-memory information. So can you comment on that a little bit?
D
This has not been an issue in my testing so far. Again, it depends on the scales you're targeting, obviously, right? But the way we wrote the Kubernetes scheduler, it's the same as how the vanilla scheduler works: it's soft state, right? So when you bring it up, the moment you register with the informers, you basically get the current state, and writing that to the database has hardly been a bottleneck for us, right? Yeah.
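The soft-state bootstrap just described (list the current cluster state, then follow watch events) can be sketched roughly as below. The `list_all` and `watch_events` callables are hypothetical stand-ins for illustration, not the actual client-go informer API:

```python
# Sketch: on restart, a scheduler rebuilds its in-memory view from a full
# listing of the cluster, then applies watch events to keep it current.
def bootstrap(list_all, watch_events):
    state = {}
    for obj in list_all():          # initial sync: current cluster state
        state[obj["name"]] = obj
    for event in watch_events():    # then keep the soft state up to date
        if event["type"] == "DELETED":
            state.pop(event["object"]["name"], None)
        else:                       # ADDED / MODIFIED
            state[event["object"]["name"]] = event["object"]
    return state

pods = [{"name": "p1"}, {"name": "p2"}]
events = [{"type": "DELETED", "object": {"name": "p1"}},
          {"type": "ADDED", "object": {"name": "p3"}}]
state = bootstrap(lambda: pods, lambda: events)
print(sorted(state))  # ['p2', 'p3']
```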
L
Yeah, yeah, of course, of course. No, I wasn't sure what's maintaining the state, but if you're kind of using informers, and if you're using...
B
Just another thing: these guys do a lot of caching, you know, the scheduler, the Kubernetes default scheduler; they have their own caching, they read from etcd. But in your case, essentially you're saying your Java code, for example the generated Java code, goes to this in-memory database to do whatever it needs to do, basically? Exactly.
D
I mean, I'm not a big fan of the database we've been using thus far. It's called H2; even its optimizer is pretty crappy, if you ask me, but it's been good enough so far. It's one of the reasons we really want to migrate to DDlog, because it can do incremental processing and we know it's much faster. So there are some silly overheads with H2 which I'm not a fan of, but...
D
I'm going to try to avoid H2 altogether, basically, right? I'm going to make a DDlog program look like an in-memory database, basically. "So you're going to build a new in-memory structure, basically? That's what you're saying?" Exactly, exactly, yeah.
D
Basically. So we already actually have it: if you look at the DDlog repository you'll see an SQL folder; a colleague of mine and I wrote that. So it basically takes your... maybe I should show you some code. Basically, the Kubernetes scheduler I had to write has a schema, right? It has a set of tables and a set of views.
D
You basically give that same schema to this DDlog-with-SQL front end that we have, and it will generate materialized views for all those tables. We basically translate all of that into Datalog, it sets up that in-memory engine, and you do writes into it; again, it looks like an SQL database. You do writes into it, and it'll basically update materialized views that you can run select-star or point queries against, and we also have deltas coming out of it. That's basically what we're building, yeah.
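The write-then-incrementally-update-views behavior described above can be sketched in miniature. This toy class only illustrates the shape of it (writes in, incrementally maintained aggregate plus deltas out); it is a simplification, not DDlog's actual API:

```python
from collections import defaultdict

# Toy sketch of an incrementally maintained materialized view with a delta
# stream: each write updates the aggregate in place, no full recompute.
class PodsPerNode:
    def __init__(self):
        self.view = defaultdict(int)   # materialized view: node -> pod count
        self.deltas = []               # (node, old_count, new_count) changes

    def insert(self, pod, node):
        old = self.view[node]
        self.view[node] = old + 1      # incremental update on write
        self.deltas.append((node, old, old + 1))

engine = PodsPerNode()
engine.insert("p1", "n1")
engine.insert("p2", "n1")
engine.insert("p3", "n2")
print(dict(engine.view))   # {'n1': 2, 'n2': 1}
print(engine.deltas[-1])   # ('n2', 0, 1)
```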
D
So it's basically much more... it'll hopefully be a much more efficient alternative to H2, and you don't have to worry about writing your own, like setting up your own indexes and things like that. It will take care of it for you, really.
D
So Ignite, keep in mind, already has something like this. As Nikita mentioned, I was looking at their documentation as well; I think, yeah, Nikita, you should probably clarify all this, but again, as he mentioned, deltas are really not specified under SQL, right? You have to go out of band for it.
D
So there are some limitations, but I think DDlog's strength is that it can do this even for recursive queries if it wanted to. That's why it's very useful for SDN controllers: the set of queries that it can incrementally compute is much broader, I would imagine.
A
Thank you very much, thank you. I guess Deepak, Newman and I can probably have some follow-up.
B
Yes, yes, afterwards; there's a lot of things we've done. This looks very interesting. The reason I say it's very interesting to me is because we tried doing the same thing in Firmament. The programming model, when you define your pod affinity and anti-affinity logic: you need to be a real hardcore graph programmer, which a regular programmer is not, you know, a data-graph programmer. You see, it becomes very complex.
D
I can show it; if you have just one second, I can quickly show you how tiny the affinity and anti-affinity rules are. Just give me one quick second.
B
...ended up doing it sequentially, because we didn't know how to do it in the graph, basically.
D
This is very good, this is very good. So please have a look; there's a tutorial, but I'm happy to follow up on these things. Thank you.
A
Thank you, thank you. We're definitely going to look into it in more detail, and into the paper for the project. Thank you, yeah. Okay, Annie, Rupal, I think the next item is yours. Thanks, thank you.
C
Yeah, okay, so yes...

N
Okay, we just dropped off, so you can continue.
C
Okay, so, all right. For the upcoming few months we're going to have a lot of events. Obviously in Q1 and Q2, because of the pandemic, we were kind of limited in our outreach activity, but for Q3 and Q4 a lot of physical events are happening, and we want to make sure that we're taking advantage of these events to help promote Centaurus and recruit more partners.
C
So there are four events that are happening that I'd like to go through with you. Event number one, and I mentioned this previously: Click2Cloud is sponsoring this. We'll call it a meetup, but it's actually a little mini-conference targeting 500-plus developers in India, and it's going to be in August. In U.S. time it's August 13 at 9 p.m., which in India time will be August 14. Oh yeah, that's great, yeah.
C
You know, please let me and Rupal know, and we will send you the information. So, thanks to Click2Cloud again; they have done a tremendous job setting up our website, and then we're going to start promoting it. It looks like we're going to have an Alibaba Cloud speaker; the last I heard they're going to have a Microsoft speaker, maybe; the Ministry of Education, India might even come as well; and then also the CEO of SoftBank. So, yeah.
C
So yes, Click2Cloud has a lot of outreach, so they are inviting a ton of people, and I think this is going to be great. Initially we were just going to do a meetup, but it turns out this is almost like a virtual event, so this is awesome. So thanks again to Click2Cloud, and thank you to the speakers who are participating. Originally we had it scheduled in July, but we figured it was best to push it out.
C
So now it's August 13, U.S. time, 9 p.m. So that's it, and the next event is the OSS Summit. This is a Linux Foundation event; yeah, if you can Google it, Shawnee, or whoever is sharing the screen, if you can show that, it'd be great; if not, that's fine too. The Linux Foundation generally has three OSS, Open Source Summit, events every year: U.S., Europe and China. This year they combined the European one with the U.S. one due to the pandemic.
C
So this is happening September 27 through September 30, and it's going to be based in Seattle. And we are lucky: we actually got a platinum booth. What happened is that Huawei purchased a platinum sponsorship but can't send their people out to the U.S. to attend the event, so they gave it to Futurewei. So we have the entire platinum booth, which is a pretty good-sized booth, and we're also going to have a one-hour tutorial.
C
So Rupa and Dr. Shawn will be speaking about Centaurus at that one-hour tutorial; if you are going to the OSS Summit, please check that out. Then at the booth we're going to show Centaurus project details, and we'll also start working with prospective partners; hopefully we'll get more partners and developers to join our project.
C
So that's the OSS Summit, and after that there's the KubeCon U.S. event in LA, October 13-15, and Futurewei has a silver sponsorship. So we are also going to showcase KubeEdge and some of the Kubernetes and CNCF projects that we previously worked on. In addition, we'll also show Centaurus, because the CNCF audience and community would also be our target, and Centaurus works with Kubernetes, so it works very well.
C
So this conference would work really well for us to promote Centaurus. And just before KubeCon U.S., on October 11th and 12th, in case you're not aware, there's also the Linux Foundation Networking and Edge Summit happening. So these two events, LF Networking and Edge and KubeCon U.S., are happening back to back at the same venue in LA. So unless there's a problem with the pandemic situation, I would expect a ton of attendees, because a lot of people...
C
You know, open-source people, they love to get together and see each other; there's pent-up demand. So yeah, hopefully, if you guys will be there, maybe let me know; I can host a dinner or something, we can get together and finally meet in person. That would be really fun. Oh, going back to the OSS Summit, I just want to let you know that because of the Huawei sponsorship...
C
We have a one-day conference room, so we're planning to do a CIO roundtable using our conference room, half a day for that room. This is Futurewei's private room that we have because of the platinum sponsorship. So in the morning we are going to do a CIO roundtable, and we're going to invite whoever wants to come.
C
If anybody C-level wants to come, let me know; I'd like to invite them, and then I can also give that person a conference pass to attend.
C
So the C-levels are going to talk about the future of 5G, edge, AI and cloud computing, and we're going to do that roundtable, actually, on September 29th (oh no, the 28th? no, the 29th, yes), in the morning. Then in the afternoon we're going to do a Centaurus deep dive. And Dr. Shawn's one-hour tutorial is at 11:20 or 11:30, something like that, right after the keynote, and he's going to announce this deep dive on September 29th. So on September 29th, in our private conference room, in the morning we're going to have a CIO roundtable and in the afternoon we're going to have a deep-dive session.
C
So if any of you guys want to attend any of these, please let me know, and then I can give you the information to attend that event. So, yeah, coming up we have four opportunities to meet with prospective partners and meet with developers.
C
Yes. Stefan, do you guys have plans to come to the U.S. for any of these U.S. events?
L
Yeah, that's also something that we're kind of worried about, and we're still... we were on the fence, and now we're leaning more towards no. We also actually submitted a paper, or a talk, for the OSS, and it's currently on the, what's it called, the waiting list. But even if it gets accepted, we are planning to do a virtual presentation there, so yeah, most likely we will not be coming.
C
Okay, great. So that's all I have; thank you very much.
A
Okay, if not... Deepak, I guess we do not have time today for yours; you know, that's fine. And also, we do not have enough TSC members today to vote, yeah.
N
Last time we discussed KubeEdge integration with Centaurus; is there any progress in that area?
B
Oh yeah, yeah, yeah, so that's been going on; that work is going on, actually, yeah. So we have a project that we call... I forgot the name, in any case. Yeah, so we're doing a lot of major work on that, and at one of the TSC meetings a couple of months ago we presented the whole design and all that: how we are forking KubeEdge and extending the whole federation, and you know, the east-west...
N
I remember that, and after that I think we discussed some of the design specs for how we are going to...
A
Yeah, well, I can share the UI. I wanted to share my screen, but due to some permission issues my Zoom cannot share the screen right now. I can send it.
N
Last time Dr. Shawn asked us to pick up some of the modules for KubeEdge and see how we can integrate those. If the team has already started, probably we can assign our current team along with them, to see how that integration is happening.
A
You're talking about the... for the...
A
And yeah, also, Deepak, for the roundtable event and the Codefest, probably we need to exchange some emails to discuss what exact topics we want to cover.
N
Yeah, I'll set up some of the questions and we'll sync up, probably next week, between Deepak, Annie and Shawnee. Oh...
N
...is 15 minutes, then Dr. Shawn is 20 minutes, and then the panel discussion between the three of you will be almost 20 minutes, along with a Q&A. And there will be three sessions back to back, with Nikita and Stefan, as I said.
C
...can spend five minutes on TSC operation, right? Like, we have a monthly call, and then show people where, you know, if they want to come, these are the dial-in details, and this is where they can find the meeting minutes. So it would be good for Shawn to talk for five minutes, and Deepak maybe talks another 15 minutes.
N
Or 10 minutes. There will be one more speaker from Microsoft, a research scientist at Microsoft, right, and then one more from Alibaba as well; she's, like, a vice president of Alibaba Cloud. So four keynote sessions, one panel discussion and three sessions will be in that particular agenda.
C
Right, so what I'm saying is, instead of making it a panel, maybe we can have three short presentations, because we really want Shawnee to present, right? Sure, we don't want to make it just a Q&A; I mean, it's more of a presentation.
B
No, I think Andy is right, actually. I think the presentation, with Shawnee doing the TSC presentation and me doing the deep dive, would be much more powerful.
C
I can host this within 20 minutes. So I can say: Shawnee, you know, I'll introduce him, and then he can present a TSC operation overview, like five minutes, and Deepak can have another 10 minutes to do the Centaurus deep dive a little bit more, and then I will leave five minutes to just...
N
Good, I'll update. I'll not change anything on the website, I mean, we'll keep it the same way, but with the same agenda that you mentioned, and it will follow that.
A
I will just prepare some slides to talk about introducing the TSC and the four SIGs we have, our focus areas and how it works, I think.
A
A very brief overview of the four SIGs, yeah. Then you can talk in more detail about the project, yeah.
E
Thanks, thanks.

B
And thanks; I think there's definitely a lot of interest in your work. This sounds very interesting, and I can tell you why: this whole declarative thing, we looked at it from the Firmament standpoint as well, but there were some issues in the whole complexity of the programming model. So I know Shawnee has been looking at it as well; he's very interested.
A
So
we
are
building
a
brand
new
scheduler
for
the
for
the
ice
vm
scheduling
we
are
we're
building
the
following
the
traditional
way,
the
heuristic
approach,
but
my
team
also
has
a
responsibility
to
take
to
look
into
innovative
research
in
this
area.
So
I'm
very
interested.
B
He is a researcher; he's a postdoc at the University of Vienna. Oh okay, nice, nice, nice, yeah.