From YouTube: CNCF SIG-Storage Meeting - 2019-08-28
A: What we're looking to try and do here is to help the TOC — what the SIGs do is provide some help in reviewing the project. And although Project Dragonfly isn't, you know, exactly a natural storage project, it was felt on the TOC call that it probably was more storage-related than anything else. So that's kind of the context of why we're looking at Dragonfly. So with that, Alan, who's one of the Dragonfly maintainers, is going to be presenting.
B: Thank you, everyone, and thanks to Alex for arranging this SIG-Storage meeting for the Dragonfly team. Actually, almost everyone on the Dragonfly team is online. A while ago we submitted Dragonfly's incubation proposal for the TOC meeting, and the TOC found that we needed to talk to SIG-Storage — exactly the context that Alex introduced.
B: Here is a very brief description of Dragonfly. Dragonfly was born at Alibaba in about June 2015. At first it focused on file and image distribution inside the Alibaba Group; in November 2017 it was open-sourced. Over that whole time Dragonfly has been a piece of fundamental infrastructure in Alibaba Group: every month it distributes about 3.4 PB of data.
B: Around October last year we tried to redesign the roadmap of Dragonfly. At that time cloud native technology was getting very hot, so the aim of Dragonfly became to be cloud native. Actually, by that time we had already been adopted by a lot of production users — the number was up to more than 20 — and we were very lucky to join the CNCF at the sandbox level.
B: After joining the CNCF, Dragonfly has been adopted by a variety of industries, and in the following slides we will talk about the adoptions of Dragonfly. With the step of joining the CNCF, Dragonfly integrated with Kubernetes-ecosystem software very easily, and that was a big help from the community. Later we will talk about the integrations with Kubernetes, Prometheus, Harbor, and some other software. And last week we already completed the CNCF review.
B: After about half a year of refactoring, Dragonfly has now been entirely rewritten in Golang, and at the next stage we hope Dragonfly can enter the CNCF incubation level. As the name suggests, Dragonfly is an image and file distribution system for the cloud native era. At first we tried to tackle the image distribution issues in Kubernetes and the CNCF landscape, but actually Dragonfly can easily be used to distribute generic files in your production environment.
B: Thank you. Next we will talk about how to support image distribution. It introduces another component, but the existing components can also be used to distribute plain files. And here are the features of Dragonfly. The most important features of Dragonfly revolve around a few key words. The first one is efficiency: we provide P2P-based image distribution.
B: The P2P-based distribution feature can improve the efficiency at scale. When your cluster is large and your image pulling makes the image source or file source a network bottleneck, P2P takes advantage of every node's network bandwidth, so it can help improve the efficiency. In addition, Dragonfly provides a passive CDN feature to avoid repetitive downloads. So that's the first part, efficiency. The second one is flow control.
B: Dragonfly can intelligently use disk I/O as a signal to judge where and when to download a task and write it to disk. Speed limits can be implemented in Dragonfly at both the task level and the host level, and that protects the disk from high I/O pressure. The third part is security. On security: the super node transfers files across the peer network, and sensitive files among them should be encrypted, so we have tried to encrypt them.
B: So Dragonfly can transfer sensitive data across the network. Only the source holds the raw data — the other nodes never see the raw data — so it can guarantee the security of it. And the fourth part: it's very simple and easy to use. Dragonfly natively supports all kinds of container technologies. The most popular one is Docker; there is also another CNCF graduated project, containerd, and PouchContainer, which is open-sourced by Alibaba.
B: These container engines can natively use Dragonfly to pull images, so users can pull container images as usual. That is the feature set of Dragonfly. Now here is a very brief introduction to the Dragonfly community. We have an incubation proposal — a PR to the cncf/toc repo — and everyone can leave comments there if you have any thoughts on Dragonfly. Last year we already merged the sandbox PR.
B: Currently we are very lucky to see that the Dragonfly project has more than 4,000 GitHub stars, and the number keeps growing. We now have on the order of a hundred developers and contributors in the ecosystem. We have delivered 11 formal releases, and this year's next release will be in October. As for the maintainers, we have seven maintainers, of course from different companies: the initial company is Alibaba Group, and currently there are three more companies providing maintainers — one of them is eBay.
B: They are using Dragonfly to distribute images; the other two maintainer companies are in China. Dragonfly has more than 50 public adopters — those will be introduced later, and they do not include the private ones. We have more than 300 members in the top developer groups, and the CII best-practices badge is passing.
B: If you are not familiar with Dragonfly, here is a very brief workflow of Dragonfly to illustrate it — see the right part of this slide. We have a super node; we can regard it as the control part of the Dragonfly system. It will try to fetch the images from the registry, fetch the files from the source, and it will schedule the downloading requests among the nodes of the network. On every node we have a container engine.
B: Besides the container engine, each node has an agent, which is dfdaemon. dfdaemon is used to intercept the image pulling requests, and it forwards the requests on to dfget. dfget is a generic agent in the distribution model — a peer in the P2P network. So Dragonfly consists of three components: the most important one is the super node, which controls and manages the system; dfget is a generic proxy agent on each node; and the third is dfdaemon.
B: dfdaemon, again, is used to intercept the image pulling requests. So those are the three components. Here is a very brief workflow; we can regard it as six steps. The first one is that the container engine wishes to pull an image, so it sends a request; dfdaemon intercepts the request and sends another one to dfget, and dfget sends the pulling request on to the super node.
B: In the second part, dfget sends the pulling request to the super node. The super node checks whether the image already exists in its cache. If not, it will use the CDN feature to fetch the image from the outside image registry, such as Harbor. After it finishes fetching the image, it replies to the requesting node that it already holds all the blocks of the file.
B: Once the agent on the host knows the super node has already cached the image, it will try to get one block of the image from the super node onto the node's disk, and the other peers do the same thing. Afterwards, across the whole P2P network, the whole pulling is finished when all the blocks have been downloaded to each node. So that is a very brief workflow of Dragonfly.
B: Of course, containerd is supported as well. Here we have the ecosystem integration. From this picture you can see the container engine can be supported very easily: we configure the proxy configuration of the container engine to make it point to the dfdaemon address. Then dfdaemon can get the image pulling request, and it intercepts it over to dfget. Once the request has been handed to dfget, it becomes a generic file pulling request.
B: Here is the deployment of Dragonfly. Actually we have a cluster, and in this cluster we can deploy the super node in a high-availability mode. So in this picture you can see two super nodes, and the super nodes provide the CDN feature and the scheduling feature. Under the super nodes you can see dfget: the agents construct a peer network to transfer the files between each other, so at the same time we can see a P2P network.
B: So this is a very effective part of Dragonfly, and we can talk about it later. And here is a much more specific view of the Dragonfly super node. Actually, the super node provides API abilities for callers to control which kind of work to do, and above the API we can see the P2P scheduler. In the P2P scheduler we provide a lot of scheduling policies: sparsest-first, network affinity, and failure isolation. Maybe you are not familiar with them, but we can share them later.
B: Then there is the CDN manager. The CDN manager has three parts: it downloads the files from the source — from the registry; in addition, it will try to compress the files to improve the efficiency and reduce the storage size; and it will also encrypt the data for certain groups of files.
B: Beside the CDN manager there is a transmission part. In the transmission part we try to configure the rate limits of each pulling task, and it controls the uploading tasks on the super node, because lots of requests are coming from the agents. To honor those request patterns, the super node needs to guarantee that it can provide stable service ability, and it decides the block size for the pulled files by itself.
B: When a task comes to the super node, the super node cuts the file into many blocks — and what the size of each block should be is decided by this part. It will encrypt the data as well. For the preheat feature: the preheat part is used for file preheating. Take the Harbor integration as an example: Harbor sends a request to the super node's preheat API, and the super node preheats the file ahead of time.
B: Here are the elements which we need to store. The first one is block data. It's straightforward: in a P2P network we try to distribute smaller blocks, so for a single file, if it is very large, we need to divide it — cut it into many blocks. That block data should be stored, and it should be managed by the storage manager, together with the metadata.
A: I'm just going to ask a question there. That block data — how is it determined to be in sync between the different P2P nodes? Do the blocks have some sort of checksums, and do you use something like a Merkle tree or something like that to compare the files between the nodes?
A: No — not the policy to judge the size of a block. The question is more along the lines of: how do you determine whether the different P2P nodes actually have the correct file, or whether the file has any changes? Are the blocks checksummed in some way, and can they be compared between the source and the destination?
B: If a node crashes, there will be some mistaken data in the cluster. The first case is that if that node has already downloaded some blocks, those blocks can no longer provide the uploading service for the other agents — that's the first part. The second part is that the block data already exists on the agent, but the super node holds the scheduling metadata of these blocks, so these blocks should be corrected.
B: The metadata for those parts of the data will be corrected, because when a node crashes, the heartbeat between the super node and the agent fails. Then the super node will mark that agent as a failed one, and it will mark the block data on that node as unavailable. So for those peers, the super node will try to schedule another node which has the corresponding blocks to serve as the source for the other agents to choose.
B: Yes, that is the first part, and there is also a second part. When a peer fails, another peer that tries to access the block data from the failed node will fail as well. When the failure count increases up to a fixed number, it will report the situation to the super node.
A: Okay, so that seems like a fairly sophisticated kind of retry and rescheduling policy for when there's a crash. But ultimately, is the data on disk protected by some sort of checksum or some hash or something, so that you know that you've got the correct data and it hasn't been tampered with or corrupted in some way?
B: When we distribute the blocks, each block is one part of the encrypted file. When it is distributed to a node — that is, when the agent on that node receives the block — the block is still encrypted. When the node wants to open the file, it reassembles all the blocks into an encrypted file, and then it will try to decrypt it.
E: So I think we still — if I understand Alex's question, I believe — I don't think it's been answered yet, so maybe we can take that offline. I had another question, and my apologies, I joined the call late. There's talk of a peer-to-peer network, but then it sounds like everything actually gets downloaded through the super nodes. Is that right? So if a client is downloading an image, is all the data flowing through the super node?
B
Actually,
no
actually,
not
all
the
data
is
from
the
super
node
super
NOLA
win
and
no
try
to
when,
for
example,
when
ten
knows
want
to
download
the
same
image
in
the
super
node,
we
word
will
cast
a
image
and
we
eat
we're.
First
it
we
are
scheduled
what
a
lot
to
one
node,
then
it
will
record
it
will
record
us
at
one
plot
us
the
earth
in
the
PIO
network,
with
you
already
to
know
that
hasn't
has
a
block
there's
another
one
and
the
super
know
that
and
they
it
will
try
to
schedule.
B: We can divide Dragonfly's abilities into two parts: the CDN part and the P2P part. In the CDN part, first we try to download the file, and after we download it we try to transform it. In this phase we compress the file and encrypt it: the compression phase is used to improve the distribution efficiency, and the encryption part is to guarantee security.
B: After we transform it, we try to store it. When we store it, we cut the file into block-size pieces, and we use our encapsulation logic to add a header to each block. In addition, we provide a storage interface, so we can store the blocks on the local disk, in some in-memory file system, or in some third-party storage services. This is the CDN part, and with it I have tried to introduce the storage-related parts.
B: The second part is the P2P phase. In the P2P phase, the first question is how to transport; it consists of three steps. The first one is how to construct the P2P network — how to manage it: every node registers itself as a peer with the super node. The second part is that, when lots of nodes produce the same request for the same image, the super node has to schedule; for that question I have already introduced some details about the scheduling.
B: When the scheduling result exists — it is first produced on the super node — the nodes will execute the downloading of the blocks from each other accordingly. And this is the transport part. The second question is how to do the flow control. Flow control can be taken care of at two kinds of nodes: the super node will do flow control, and the agent will take care of it as well.
B: We use some algorithms to improve the efficiency. For example, every block has its own position in the file. So when we write the blocks to the disk, we try to find some blocks that are nearby and write a bunch of blocks that sit next to each other into the disk together. That makes the disk writes more sequential, which reduces some of the cost, and we will also check the file and the blocks against their digests.
B: When the block validation is successful, then we will try to check the file; after the file check is okay, the file downloading is done. For the Docker part, we feed the image file back to the container engine as a stream, while a fully completed file that has already been distributed inside the network is served over HTTP. There is some more detailed description of the procedure here.
B: If the compression ratio is less than 60%, it means we can guarantee an efficiency gain if we do the compression, so the super node will do the whole-file compression. We provide gzip, and an LZ4-style policy could also do the compression work. And the third one is the encryption part: we use DES as the default to make the cached files on the super node encrypted. As for the storage part, our policy is as follows.
B: The super node cuts blocks in a range between one megabyte and fifteen megabytes. At the bottom of the slide we have a picture of the block structure — the organization of a block. It shows four bits that are used to identify the block size.
B: In the middle is the block data. On the left-hand side of the block data there are 24 bits to identify the block data size, then three bits reserved, and one bit to identify whether the block is compressed; and the four bits are what we use to identify the block size. This is the very simple encapsulation protocol of a Dragonfly block.
B: Actually, we have considered these concerns in the design — you see there are three bits reserved. We can try to take advantage of them to be backward compatible with other versions. That is what we desire, but currently we provide only this one kind of protocol; we have never done a protocol upgrade as of now. That's for the first question.
A: I mean, maybe not all the information is here, but I've got a few questions to try and understand how you would actually store this information. Because, you know, just having compressed on and off probably isn't enough to tell you if you're, say, compressing with two different algorithms like gzip or LZ4. But also, you know, DES seems like quite an old encryption standard, and what if you wanted to use a different encryption algorithm? How would you?
A: You can't really change any of those algorithms, and it would seem to me that — for example, because you use DES for encryption, which, to be honest, is fairly old, about twenty years out of date at this stage — you know, if somebody wanted to switch to something else like AES, for example, I don't see how you would be able to do this without rewriting the on-disk format.
B: Okay, let's go to the next section. Here are a few key parts and procedures in the P2P part. The first one is the scheduling part, and we provide some policies here. The first policy is sparsest-first: the super node will schedule the sparsest — the rarest — blocks in the P2P network first. It means that if we have cut a file into many blocks, then after some time of distribution, different blocks end up with different numbers of replicas in the network.
B: So the super node will schedule those rarest blocks first, because we try to make the blocks spread evenly through the network: every node then has more possibility of assembling the complete blocks of the file, and it also helps reduce the time cost. The second policy is network affinity. The affinity part we can understand like this: the super node will choose the node with the best network condition in order to schedule the serving of a block.
B: For example, ten nodes try to pull a block from the super node, but the network conditions of those ten nodes are very different. The scheduler will try to find the node with the best network condition and schedule the block to that node first. And the third one is the failure-isolation policy. It means never scheduling to a node that has already failed — failing once or twice is how it can be counted.
B: The super node will limit the parallelism of its task capacity dynamically: if the super node's workload is very large, it will dynamically limit the task capacity. The agent can also control the number of pulling tasks, but that is not related to the super node's scheduling — it is purely about downloading. The downloading part is related to dfget and the agent: the agent can dynamically adjust the download options, so it can dynamically configure things like the image downloading limits in some cases.
B: We can provide network I/O bandwidth limits, and we can limit the disk I/O for the disk-write part. First of all, we track the position — the block offset — as I mentioned just now. When all the blocks of a file have finished downloading from the peer network to a node, the agent — dfget — has to combine all the blocks into one file.
B: Every block has its own offset within the file. So we find the position of each block by its offset and combine the nearby blocks together, so that they are written into the disk together. This kind of action reduces the time cost when we combine the blocks into a file, since it can schedule the local write sequence to reduce the disk seeks and their cost, along with the file check.
B: A further part is that after the agent finishes downloading one block, it will try to validate that block. If the validation fails, it can fail fast — there is no need to download the rest of the blocks. And after we combine all the blocks into one single file, we check the integrity of the file. So that is the P2P part and some of the storage-related details in it.
B: And this part is talking about Dragonfly's cloud native ecosystem. Actually, Dragonfly has succeeded in integrating with several cloud native projects. The first one is Prometheus: the super node provides a metrics API for Prometheus to collect metrics, and the metrics can be sent on to a Grafana module for display. The second one is containerd. We all know that containerd is a CNCF graduated project — a container engine — and actually Dragonfly can be integrated with containerd very easily.
B: So the third party is Harbor. With Harbor we can collaborate on image prefetching — preheating — and it can improve the efficiency when all the nodes try to pull the images. And the Dragonfly node agent can be installed cluster-wide: actually the agent — dfget — can be deployed as a Kubernetes DaemonSet. So this is the integration of Dragonfly with the cloud native ecosystem.
B: For the adopters, we have China Mobile, China Unicom, Huawei, and others using Dragonfly. For the second part, there are several e-commerce companies: they usually have very large clusters, they are taking advantage of containers, and at their scale they must solve the distribution issues. So taobao.com in China, jd.com in China, and Shopee — a company based in Singapore serving Southeast Asia — and their related groups are also using Dragonfly. Then there are the cloud service providers.
B: Several cloud providers are using Dragonfly to provide distribution services for their customers, and some live-streaming companies — bilibili, for example, along with other very famous public live-service vendors in China — are using it as well. So actually, lots of companies in China have to solve the image distribution issues.
B: And of course, Dragonfly is their first choice. There is also an artificial intelligence company, iFlytek — a very famous intelligent-speech company in China. They also use Dragonfly to distribute their very large images; usually their image size is larger than 20 gigabytes.
B: Now here is the roadmap of Dragonfly. Firstly, on features, we will try to make the super node's HA more mature. Actually, today, to deploy Dragonfly you still need to deploy one super node — or two super nodes to provide the HA. But how about decentralized scheduling — making the scheduling ability exist in each node? This is a concept from eBay: they are providing fully peer-to-peer distributing abilities.
B: Instead of deploying a super node, we will try to make every agent take part in the scheduling system and make the agents self-governing in distributing the data, with a P2P-based mechanism to decide this part. We also try to make a flexible plugin framework — for instance for encryption and for more file-transfer protocols. In addition, we hope to provide a very good Dragonfly UI to be user-friendly for these scenarios.
B: Actually, currently we mostly run Dragonfly on physical machines, but we found that we should do some optimizations when the environment changes from physical machines to cloud disks: the I/O condition and the network condition can be totally different from a physical machine, so we will try to do the performance optimization. And actually we have got lots of feature requests from IoT scenarios — lots of devices trying to distribute data.
A: So I think — and Quinten, please feel free to interject here — I think some of the next steps are: we'll collate some of the information and any other questions that we have. But we'll also probably need to speak to two or three of the users of Dragonfly to sort of better understand how they're using it in production, their use cases and that sort of thing, because that's required for the due diligence.