From YouTube: SIG Performance and Scale, 2022-01-06
Meeting Notes: https://docs.google.com/document/d/1d_b2o05FfBG37VwlC2Z1ZArnT9-_AEJoQTe7iKaQZ6I/edit#heading=h.yg3v8z8nkdcg
A: Okay, welcome everybody to SIG Scale. It's January 6; it's actually 2022. The notes are in the chat.
B: I was talking to the community in the community meeting last year, and they told me this is going to be part of version 49. Can someone confirm that, and when is 49 going to be released?
C: Sure. 49 got a release candidate yesterday, and it will most likely get released officially next Wednesday, the 12th. The virtual machine pools implementation exists today, but there are some changes coming in the future to be aware of. One change I'm working on is the hash algorithm that's used to determine when pools should actually update their virtual machine instances and things like that. I need to change it because I made an error in my calculation there.
B: Okay. I have another suggestion here, and I would like something clarified: are you aware of Instant Clone from VMware?
B: Instant Clone, the way that VMware does the cloning. Okay, I put a public PDF in the chat window that shows how it works.
B: If you go to page five, I would like to talk with you about Instant Clone, exactly how it works. In paragraph two: Instant Clone uses copy-on-write for memory and disk management, and this is very, very fast. Can you explain what is happening with the pool today in KubeVirt?
C: Certainly, yeah. This isn't something KubeVirt-specific; this is the storage backend. If you are using something like Ceph or GlusterFS, there is what we essentially call smart cloning. When you create a golden image of your disk, let's say it's a Windows virtual disk, and then you want to start a virtual machine, you define that golden image as the source for your DataVolume and you create a PVC off of it. What happens in the backend...
C: ...if everything is set correctly on your storage provider, is that it creates a really quick smart clone of that initial PVC, and then the only thing being stored is the writes: it stores the delta between the original golden image you had and the changes that are per VM.
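As a rough sketch of the flow described above, a CDI DataVolume that clones a golden-image PVC might look like the following (names, namespaces, and sizes are illustrative, not from the meeting):

```yaml
# Hypothetical example: clone a "windows-golden" PVC into a new per-VM disk.
# With a storage provider that supports it (e.g. Ceph CSI), CDI can satisfy
# this as a fast smart clone rather than a full copy.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: desktop-vm-1-disk
spec:
  source:
    pvc:
      namespace: golden-images   # where the golden image lives
      name: windows-golden       # the base image PVC
  pvc:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 30Gi
```

Each cloned VM then carries only its own delta on top of the shared base image, as described above.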
B: About deduplication: we were thinking of using a solution called VDO that does that on top of Gluster, but as recommended by the community we moved to Ceph with Rook. Deduplication can decrease the amount of storage used by up to 85 percent, because at least that much is identical across all the clones of Windows, you understand. So this is a...
B: The one I sent in the chat window; yeah, that one exactly. Can you scroll down?
B: Oh no, the explanation is further up. The first line has a link that you need to click; that explains everything I need here: "initial implementation found here." Can you scroll down?
C: I would use opportunistic. So what would happen is somebody would log off of their actively running virtual machine instance, and then...
C: ...when that VM starts again, it would use the new, updated image. So you're updating that virtual machine when it's convenient to do so, meaning when the virtual machine is offline and being restarted. That does not exist yet, by the way; that feature we...
C: ...intend to do, but the API and everything hasn't been implemented yet for managing this update strategy. To make it clear: we only have proactive today.
B: That would be great. And also the missing part: since we have a hundred clusters, each with up to 1,250 nodes, we plan to have a central repository of the templates for all the clusters.
C: Sure, what's the question?
B: Just to clarify: today you install the files on the same cluster, but we need to point to a central repository that all the clusters can get the same file from.
C: Got it. Okay.
C: If the central repo, or whatever it is, can be accessed via something like HTTPS, then it could potentially work.
C: If the repo exposes these golden images as files that can be downloaded over HTTPS, then you can access it from any cluster, and you would just put that as the storage source in your DataVolume template. The problem with that is that you won't get smart cloning; you're doing a complete copy.
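A minimal sketch of that HTTPS-sourced DataVolume, with an illustrative URL (CDI imports the image as a full copy, so no smart clone, exactly the trade-off C describes):

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: desktop-vm-1-disk
spec:
  source:
    http:
      # Hypothetical central golden-image repository, reachable from any cluster
      url: https://images.example.com/windows-golden.qcow2
  pvc:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 30Gi
```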
C: No, each pool would be identical replicas. So if you wanted multiple flavors, it would be a pool per flavor.
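For reference, a pool-per-flavor setup along those lines could be sketched with the VirtualMachinePool API, which was alpha at the time of this meeting (all field values here are illustrative):

```yaml
apiVersion: pool.kubevirt.io/v1alpha1
kind: VirtualMachinePool
metadata:
  name: desktop-flavor-1
spec:
  replicas: 100              # identical replicas of this one flavor
  selector:
    matchLabels:
      flavor: type-1
  virtualMachineTemplate:
    metadata:
      labels:
        flavor: type-1
    spec:
      running: true
      template:
        spec:
          domain:
            resources:
              requests:
                memory: 4Gi
```

A second flavor would be a second pool with its own template, per C's answer above.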
B: But is there a way to... because, let me explain why. Can I share my screen? Yeah, I can share now. Okay, let me explain my pain.
B: This is the Kubernetes cluster that I have KubeVirt on. For every 10,000 concurrent users I have two domain controllers, because this is an Active Directory limitation: each domain controller handles only 5,000, so you understand. But since every cluster can offer 12 flavors of desktops, the cluster is going to have between 157 and...
B: ...1,250 nodes. Because, so you understand: if all the VMs use type 1, I divide 10,000 by 64. If all the users are using type 4, I need far more nodes. This is why 157 is enough if everybody is type 1, but since there is a mix of VM types in the same cluster, this needs to be dynamic, and I can only go up to 10,000 because of the Active Directory limitation. Is there a way for us to control that across all the pools in the same cluster?
A: Yeah, I think you need to... So one of the design intentions was that the object encapsulates similar VMs, so that you know what your similar-VM count will be and you just move that up and down. So I think what you need to do, at least for now, is have a controller that manages this.
A
We
had
talked
about
doing
something
like
I
mean
we
called
it
a
fleet
which
is
kind
of
another
abstraction
on
top
of
this,
but
it's
not
something
that
at
least
is
in
scope
for
for
doing
pools.
So
this
is
something
that
I
think
that
you'd
have
to
have
a
controller
that
that
deals
with
this
kind
of
moves,
the
pools
up
and
down
based
on
demand
and
the
flavors
you
need
at.
A: We do some things with labeling, with the scale-down, kind of with this in mind. So maybe, as part of your inventory of the number of users that are active, you use some labeling to differentiate the VMs that are running there pre-warmed from the ones that are actually actively being used.
A: This is not ready yet in the code. Well, like David was saying, this isn't necessarily something that I think has to do with VM pools, because what you're asking would mean that the VM pool object would have to know when one of your users is using the VM, and that's something that is maybe a little bit outside; I mean, I think it's a little bit outside of KubeVirt. It's a layer above.
B: Okay, no problem; just so I know where I need to touch to make it happen for us. Okay.
A: Yeah, and, Andre, what I want to reiterate about that label: the reason I'm recommending it is because that's exactly where we're going with the scaling strategy, with the selection part. Where do we have it? I don't remember what it's called, David. You're talking about the ordering; yeah, we'd use the labels, we'd use the ordering. Is it "selection policy"?
A: Maybe I forgot... oh, here it is: we use an ordering of policies. Basically, what we want to do eventually, when you do scale-in, Andre: if you want to lower the number of VMs you have, you don't want to kick users off, right? So use the labels as a way to avoid kicking active users off.
B: Yeah, and the last question is about live migration. Are you already working on live migration with GPUs attached?
C: You're asking about when we're gracefully taking a node down, so draining a node. One of our mechanisms today is that we have the ability to live-migrate virtual machines off the node that's being drained, so they stay alive after the node disappears. I don't remember the state of whether we can migrate GPUs right now or not; that's something I keep hearing about.
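As a minimal sketch of the drain mechanism C describes, a VM opts into live migration on node drain via its eviction strategy (the VM name and sizing are illustrative):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: desktop-vm-1
spec:
  running: true
  template:
    spec:
      # When the node is drained, KubeVirt live-migrates this VMI
      # instead of shutting it down with the node.
      evictionStrategy: LiveMigrate
      domain:
        resources:
          requests:
            memory: 4Gi
```

The drain itself is then the usual `kubectl drain <node> --ignore-daemonsets`.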
C: ...I don't know how that works for GPUs, because there's state in the GPU: the memory of the GPU, I don't think, is synced during the migration. I don't know how that would work right now. It seems like it could be possible, but I haven't heard of it working yet; I could be wrong. That might be something for the mailing list.
B: That's what I need to know. You know, we are putting this into production as soon as possible, and we're going to have one million concurrent users on the solution. Nice, cool. I have done the interview with the CNCF about the community, and it is helping us a lot, you know. Okay.
A: Yeah, Andre, we have... I'm trying to find them. We wrote some; here they are. I think these were some of the extra items that we still have for VM pools; I'll copy them up here. Some work items that are outstanding. These are things like you mentioned; we know these aren't implemented yet, and we want to do them, if you have resources.
A: Thanks, Andre. Okay, David, I saw your note: do you have to drop now, or...?
C: Well, I've got to prepare for another meeting, but I wanted to say something real quick about the pools, because it's something that happened late last year, before we had another meeting. We had that "hash algorithm changes" bullet point in there that you'd put in. Yeah, I discovered an issue with how I was calculating whether a virtual machine instance needs to be updated or not. That introduces some risk, so I'm making some changes. They won't exactly be backwards compatible, but the API won't change.
C: So on the API side, if you use a virtual machine pool, you can continue using it, but the behavior of the proactive update will change. The risk here is: if you adopt virtual machine pools today, when you update to some future KubeVirt release, they are all going to update; all the virtual machines are going to be restarted, whether something actually got updated or not. That's because I need to change this hash algorithm so that this won't occur in the future. I'll create a bug for that.
C: I haven't done it yet, but I've already started working on the code, and I'm hoping I can have it out in the next week or so. I'm kind of slammed with a lot of different projects right now, but I definitely need to get this out. It's actually pretty involved, complicated stuff; code-wise it's a big change and it requires a lot of testing, but it will be much better when I'm done.
C: Before I go, yes, since I probably need to prepare for that presentation: is there any topic we could jump to that I need to be involved with directly?
A
I
can
I
can
talk
these
two
at
a
high
level.
Maybe
you
can
tell
me
so
like
the
the.
The
only
thing
I
wanted
to
do
with
the
performance
was
well
supposed
to
go
through
them.
I
mean
I
think
we
can.
I
mean
we
can
take
this
one.
We
can
do
this
offline.
A
I
think
the
only
note
I
saw
was
that
it
we
partially
worked,
because
I
I
think
I
saw
like
I
posted
in
chat
yeah
and
then
this
is
back
degree
we're
oh
we're
over
100
or
right
around
100,
now
we're
down
again.
So
I'm
not
sure
why.
A: Okay, yeah, that was that one. The other one: I was kind of hoping Marcelo was going to be here, because Marcelo and I talked for a while about tests we want to do in the load generator, the importance of them, and how we go through them. We came up with two from last time, but we can...
A: Okay, yeah, okay, all right. The other thing I want to do is open up some of those documents more.
A
Marcelo
actually
pointed
to
a
few
interesting
things
that
I
really
want
to
look
at
in
the
coming
few
weeks,
or
at
least
talk
about
like
this.
One
was
kind
of
really
interesting
to
me,
because
so
I
saw
so
marcel
found.
The
kubernetes
did
some
work
measuring
a
few
different
type
of
things
that
affects
scalability
and
one
of
the
really
interesting
ones
was
this.
A
He
has
vm
churn
here
and
it
actually
what
what
this
means
is
the
number
of
verbs
or
like
api
requests
that
are
made
will
have
an
effect
on
scale.
This
is
this
is
pretty
reasonable,
like
this
is
what
we'd
expect,
but
they,
the
community,
had
quantified
this
in
a
number
of
different
ways
that
I
think
we
can
actually
leverage,
and
it's
something
that
I
think
we
could
focus
on,
because
we
already
have
now
the
metrics
that
give
us
the
number
of
creates,
updates
and
deletes,
and
so
it
would
be.
A
I
really
want
to
focus
kind
of
you
know
once
we
get
a
hold
of
this,
you
know
these
metrics
in
the
periodic
starting
to
get
accurate
counts
and
look
at
this
more
closely
to
see.
If
there
are
ways
we
can
reduce
this,
I
think
actually
having
thresholds
is
kind
of
where
I
want
to
go
with
this,
because
if
we
know
it
affects
scale
the
number
of
requests
and
it's
something
that
we
should
monitor
and
make
sure
that
we're
within
a
threshold.
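As a sketch, the per-verb request rate A describes could be watched with a query like the following against the API server's request metrics (the label filter is illustrative):

```promql
# API request rate for KubeVirt resources, broken down by verb;
# create/update/delete churn per VM shows up here.
sum by (verb) (
  rate(apiserver_request_total{group="kubevirt.io"}[5m])
)
```

Thresholds on such a rate, per VM lifecycle, would give the alerting A is proposing.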
C
Yeah,
certainly
that's
been
kind
of
the
theory
behind
that
whole
threshold
monitoring
for
the
yeah,
the
thing
that
doesn't
work.
Unfortunately,
the
kinds
of
problems
that
we
find
is
we
get
into
like
quick
update,
loops
and
things
like
that,
where
maybe
two
components
are
fighting
with
each
other
to
update
an
object
and
they
keep
overriding
each
other
or
things
like
that,
and
then
they
stabilize.
But
that's
like
all
unnecessary.
We
didn't
need
to
do
all
that.
C
Like
kind
of
update
storming
and
things
like
that,
and
when
that
gets
multiplied
by
thousands
of
virtual
machines,
it
becomes
a
bigger
and
bigger
problem.
So
yeah
we
definitely
want
to
shrink
the
number
of
api
requests.
We
have
per
avm
during
the
life
cycle
of
the
vm
and
that's
probably
one
of
the
most
important
things
we
can
start
monitoring.
A
Yeah
yeah
there's
that
one
and
then
there
was
another
one
you
saw
vms
per
node
like
there
was
a
few
of
these
that
were
really
intriguing,
but
I
think
ultimately,
and
there's
also
name
space
like
pods
for
name
space.
It
has
an
effect.
So
there's
like
a
number
of
these
things
that
I
think
I
could
totally
see
being
their
own
tests
and
then
having
their
own
thresholds
around
and
so
on
and
so
forth.
A
But
so
it's
really
good
to
see
because
it
actually
provided
a
lot
of
clarity
in
terms
of
like
things
that
we
suspected
so
what's
good
is
that
I
think,
like
you
know,
we're
going
to
start
by
kind
of
talking
about
the
problem,
defining
this
problem
space
and
then
and
then
really
getting
a
grasp
of
like
kind
of
building
these
these
periodics
and
yeah.
But
anyway,
it
still
starts
starts
with
this.
First
one
we
got
to
figure
out
like
just
the
first
one,
with
this,
the
number
of
pre-requests
and
get
that
one
working.
A
But
while
we
talk
about
the
other
ones,
so
I
think
I
think
we'll
probably
take
it's
going
to
take
some
time.
I
think
to
sort
that
out,
probably
another
meeting
or
two
before
we
have
kind
of
a
full
answer
on
on
what
we
do
with
this
loan
generator.
So,
okay,
all
right,
that's
pretty
much!
All
I
have
to
talk
about
for
for
both
points
I
mean:
if
did
people
have
any
other
items?
A
We've
got
another
30
minutes,
people
any
other
items.
I
need
to
drop.
A: I think you would need to do an export, I think, because I don't know how one API server would have the ability to accept incoming objects from another one. I think you'd have to export the VM, but at that point I don't know if it's live migration, because I think the status isn't "running" anymore, so I would say no.
B: There is a controller of the clusters, and if one day a cluster needs to be shut down completely, they have a way to move all the load from one cluster to another. There is some downtime, like milliseconds, but if the two clusters share the same disk space, they are able to make it happen.
A
I
see
so
there's
the
the
same
just
space,
so
I
mean.
Could
this,
be
I
mean,
is
this
live?
Is
it
I
mean
in
terms
of
would
we
would
you
call
this
live
migration,
or
would
you
call
this
like
creating
a
new
vm
and
the
new
cluster
with
the
same
or
very
similar
disk?
You
know
not
exactly
the
same.
You
know
state
and
ram
like
it's
just
kind
of
a
pre-warmed
disc
that
suddenly
just
inherits
the
load.
B
Is
to
make
the
user
doesn't
know
that
something
goes
wrong
like
if
he's
freeze
for
one
or
two
seconds.
This
is
almost
nothing
for
him:
okay
and
everything
back.
Oh,
it's
back
again,
something
freeze
that
that
can
be
a
network
issue
or
something
okay
and
then
everything
back
again.
This
can
be
very
nice
for
the
user
perspective.
A: Well, it's not only the traffic but, you know, any sort of state that may be occurring on one and going to the other.
B
In
one
place,
move
and
then
restart,
let's
say
we
need
to
sync
the
memory.
That's
the
issue.
A: Okay, yeah. I mean, it sounds like you'd need to go through sort of an export process, especially for the memory, because, at least for now, and this is something I've looked at, there is some snapshotting work going on, but the live snapshotting work, I think, is still in progress, meaning taking a VM and taking a snapshot of it...
A: ...while it's running; taking a snapshot of a VMI, of what's running, and saving the RAM. That's still a work in progress, so I think you need that first, and then you need to take that snapshot and have it accessible by the new cluster, and then, you know, do the restore over there. So, in theory, there's a way to do it.
A
I
don't
know
I
guess
you
could
call
that
export,
but
I
don't
know
if
it's
like
classify
that
necessarily
as
the
same
kind
of
live
migration.
You
have
that
you
have.
You
definitely
have
a
pause
in
there,
while
the
new
vm
starts
up
with
the
new
state
and
stores
can.
B: Can you put in the chat window the person I can talk to regarding the live migration of GPUs?
A: Hey, we were talking earlier about a few changes in VM pools, and then about the periodic, Marcelo; it doesn't quite do what we expected. We're not at the point where we have 100 consistently, unfortunately, so I think we still need to do some work there. And then, next week... Dave was here earlier; we were talking about this, and he was actually asking me about your perspective with KubeVirt and with...
E: They both measure different things; they have different use cases, but a burst test can be a use case. Like, some nodes fail and then they come back, and when they come back, they try to recreate thousands of VMs.
E: You know, some nodes fail, or a user can want to create many VMs in a single shot, say a thousand, and especially now that we have these VM pools, that can be a use case for burst. It's what we are calling batch, but "burst" is more the Kubernetes term.
E: Okay, so we'll have a cycle. Let's say, hypothetically, 20 VM creations per second, and we have a maximum number of VMs that we can have in the cluster. When we reach the maximum, we start to delete VMs and then recreate the ones that have been deleted, things like that. So it starts to cycle, and then it measures how the system behaves at this throughput. That achieves the steady-state scenario; that's what I mean by steady state.
A: The steady-state one, I think, is really interesting, especially for VM pools, because if you do any pre-warming, for example, you're going to have a constant number of users in your zone and then pre-warmed VMs, so you're going to have a fairly large capacity constantly.
A: Andre, out of curiosity, how many VMs are you guys running, or planning to run, in production?
B: 10,000 VMs per cluster.
B: I can share the screen and I'll show you. Just one second; let me mute that again.
A: The reason I'm asking, Andre, just so you know: what's interesting to me is that, for me at least, I can speak for what we do internally. This is something that we do, and we're interested in how others are doing their large-scale testing. We eventually want to write some guide or some notes around this, because I think we're discovering different challenges, so we're wondering how people are doing it and how we're all solving these problems.
B: In our solution, the users come to our portal, and through our VDI they first hit our API, which controls how many users are in each cluster; then we have the VDI broker near every 10,000 concurrent users. You can see here the VDI broker talking to every 10,000 concurrent users, because we have the Active Directory limitation: every domain controller needs to handle up to 5,000 concurrent users.
B
We
need
hatch,
we
have
two
controlling
every
10
000
and
that's
why
we
have
a
vpc
with
10
subnets,
every
subnet
handling,
10,
000,
concurrent
users,
this
the
single
vpc
handling,
100
000
concurrent
users,
and
to
scale
that
we
have
overlapping
for
ip
of
ips
of
then
slash
xxs
s8,
as
you
can
see,
for
every
hundred
thousand
concrete
users.
This
is
how
we
scale
the
solution
across
the
other
regions
and
also
how
these
vpcs
talk
to
this
vpc.
B: As you can see, we have a namespace for every domain controller, for every domain, so you understand; this is how it's handled.
B: The same way, we have a cluster for every 100,000 concurrent users for the file server, and they sync with each other across all the file servers, because at any time any user can be logging in on any cluster, and also in any region, behind our file server. This was developed by us.
A: So, if I'm reading that correctly, is it like you're having about 20 VMs per namespace? Is that what it is? Can you scroll up a little?
B: That's for the domain controllers, yes. But as you can see, for the desktops we have up to 10,000, and those are not on this cluster; that's another cluster. I was showing you here the relation between this VPC and the one VPC that is handling that; this is only a zoom in on a single VPC.
B: As you can see here, it's a single VPC, so you understand how they talk to each other. Okay, so you have...
B: ...we're going to have something around 100 clusters that can go up to one thousand two hundred and fifty nodes each.
A: No, I mean, what I'm wondering is a little bit lower-level. I was just wondering, in a single cluster, how many of these VMs per namespace? Is it a single namespace with 10,000? Yeah, 10,000. Okay.
A: Any other questions? No? That's pretty cool. Oh, actually, one other question: have you guys seen any issues at this scale, like when you have 10,000 in a namespace?
B: It held up. We have tested two hundred thousand concurrent users already, but we are preparing to move our current user base from the old version to the new version, and this is why we are preparing the infrastructure for one million concurrent users. Also, we are selling to the end-user market. We don't know how successful we're going to be there because, you know, we're going to offer our services for free. Let me just show you how this is done.
B: The alpha version is already in place, and as you can see, the users can log in, and when they log in they can choose between one of the 12 flavors we have, so you understand. After that, when they are running out of credit, they can click here, watch a 30-second video, and grab some credits.
B: If they watch four videos, they can use the entire service for free: Windows for 30 minutes, or Linux for one hour, because Linux is always half the price.
B: Thanks. Well, nice to know you guys. I plan to always be part of these meetings; if I can contribute with something, always ask me.
B: We plan to put in our effort to build what is missing to make the solution work. If you can point us in the right direction on what is missing to code, we're going to code it and give it back to the community, because KubeVirt helped us a lot to make our solution a reality.
A: All right, well, I don't have any more items, everybody, so if there isn't anything more to discuss, I think we can end a few minutes early. Okay, thank you all; thanks for your time.
B: Oh, you need to join... I joined, and it's working, right?
A: Okay, all right. Any other topics?
B: Okay, I would like to ask everybody to test our solution when we release the beta version. Is there a best way to do it?
E: So yeah, if you want to, KubeVirt has a demo session, and if you want to show a demo, especially focusing on the solution and saying, "we are using KubeVirt, and this is how we deploy it"...