Description
Stanley Bishop and Gordon Gould from New Atlantis DAO take us through their use of genomic data processing pipelines on IPFS and Bacalhau to improve the protection of sensitive marine waters.
https://newatlantis.io/
Sid and Shaki from Polybase take us through their new database platform built on open standards (IPFS, libp2p) and how Web3 users can build more robust applications using this new technology.
https://polybase.xyz/
A
And we're live, all right. Everyone, thanks for joining us live, and thanks for joining remote. We are super excited. I think this is our 14th Compute Over Data remote session, so we are very proud to welcome the New Atlantis DAO team, and the Polybase team is also joining us today. Much, much love for the people building in the space and the people building on top of Bacalhau.
A
Let's start with just a brief advertisement: we have the Compute Over Data Summit coming up, May 9th through 10th. If you haven't submitted to the call for papers, we'd love to have anyone in the community talk at the event and promote the work that you're doing. Or, if you're a user and you have use cases and need to build on top of decentralized compute over data solutions, we'd love to have that as well. Or if you just want to attend and get up to speed.
B
Yay, so excited to talk about New Atlantis DAO today. Thank you guys for having me. And as a data scientist, thank you for... I know I'm going to say it wrong:
B
Bacalhau. Bacalhau. You know, I have been wrangling complex data systems for most of my career, and a lot of times it's painful, and a lot of times you don't feel like you're putting your work in where it would serve the best purpose. I think that what you guys are building is going to do so much to help address that. With that being said, let me dive right in. We will be exploring New Atlantis's first decentralized protocol tool, telling a little bit of the New Atlantis story, and then explaining how open protocols on decentralized compute
B
are, you know, something that we feel is going to be key to saving our oceans. So, just to set the table: if you see my screen, I am here logged into the New Atlantis cloud lab, which is at lab.newatlantis.io, and that is our access point to the New Atlantis research cluster. Our goal, through this lab, is to integrate every tool, dataset, and resource needed by marine scientists, so they can do their work unencumbered. And yeah,
B
we're looking at the first fruits of those labors right here. Anyway, we'll be going through a little presentation about New Atlantis, and then we'll return to the particular tool we've been building. We're right here in the New Atlantis protocol notebook. This is the part of the decentralized protocol iceberg that, for us, sticks up out of the water, and is where scientists can access, learn about, and work with the protocol.
B
The protocol that this little discussion will orbit around, just to give it a little intro, is called HUMAnN, and it is a traditionally clinically applied metagenome analysis pipeline that lets you take a complex population of microorganisms and actually see what they're doing chemically. There will be more of a chance to talk about this, but just to review: we've got all of our code here nicely hidden, and that's a little bit of the idea, to let the scientists focus on the science. Anyway,
B
on to the presentation and our demo. So here we have the interface for accessing the protocol, and then I've also snuck my presentation slides right here inside it. So hey, welcome to New Atlantis DAO, and here's our mission statement.
B
New Atlantis seeks to address the twin challenges of climate change and biodiversity loss by aligning community, government, industry, and individual benefit with the improving ecological health of our oceans: the healthier our oceans become, the more value is generated for relevant stakeholders. And how are we going to do this? Well, it's the DeSci ethos, right? It's the idea that all of the resources are on the table to solve this problem; it's about connecting the dots, and it's about getting people from very different backgrounds working together. So our core constituencies are scientists,
B
technologists, and business and economics leaders, and New Atlantis wants to be there, right at the center, developing the tools and scaling the tools to create a leading, open marine biodiversity data platform that will harness the collective intelligence of all of these global communities. But why? But why? And,
B
forgive me if this part gets a little scary: we are headed towards the cliff, y'all. I don't mean to be morbid, but for the duration of this presentation, half of the breaths you take, the oxygen in them was created by the plankton in our oceans. And the plankton are out there every day, working hard for us, but they're under a lot of pressure.
B
The ocean is warming. This greatly throws off the interactions between the different species that are, you know, doing work for us out there in the oceans, and we don't have the foggiest idea how fast the bottom could fall out on this, or how close we are to the edge. So, for example, one of the big things that's happened for me this week, you can see right here:
B
there was a fairly alarming article in the Washington Post showing that this year, in particular, we're experiencing unprecedented warming, to a point that is raising a lot of concern about the implications for the health of the planktonic community in the oceans, which is kind of the foundation of both the air we breathe and the food web that we're perched atop. So anyway, how do we deal with this, right?
B
Well, the first way to treat a sick patient is to, you know, gather data and do some tests, and that's what New Atlantis is here for. So right now I'm just going to say a little bit about what metagenomics is. Imagine grabbing a glass of ocean water.
B
How many different species of microorganisms are present in that water? How many fragments of DNA from fish and whales passing through? How much bacterial DNA is in just that glass of water? Well, it turns out a whole heck of a lot of DNA is in just a single glass of water, and in the same way you can take a patient's blood to understand the state of a patient, you can do the same with the ocean. So here is just a little caricature of what our pipelines look like.
B
The DNA is extracted by scientists to create what are called raw reads. These are little puzzle pieces, or fragments, of a total genome, and then the computers (and of course there's quite a bit that I'm glossing over in this slide) put it all together into the full genomes, and then we get the ability to see, understand, and explore the state of this incredible food web that we rely on.
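As a toy illustration of those puzzle pieces being put back together, here is a minimal sketch of greedy read assembly in Python. It is purely conceptual; real metagenome assemblers, and the stages the slide glosses over, are far more sophisticated:

```python
# Toy illustration of assembling "raw reads" (overlapping fragments) back
# into a longer sequence. Real metagenome assemblers are far more
# sophisticated; this only shows the puzzle-piece idea.

def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of `a` that is also a prefix of `b`."""
    best = 0
    for k in range(min_len, min(len(a), len(b)) + 1):
        if a.endswith(b[:k]):
            best = k
    return best

def greedy_assemble(reads: list[str]) -> str:
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = reads[:]
    while len(reads) > 1:
        k_best, i_best, j_best = 0, 0, 1
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j and overlap(a, b) > k_best:
                    k_best, i_best, j_best = overlap(a, b), i, j
        if k_best == 0:                      # nothing overlaps; just join
            reads[0] += reads.pop()
        else:
            merged = reads[i_best] + reads[j_best][k_best:]
            reads = [r for n, r in enumerate(reads) if n not in (i_best, j_best)]
            reads.append(merged)
    return reads[0]

# Overlapping fragments of the sequence ACGTACGGTACG:
print(greedy_assemble(["ACGTACG", "TACGGTA", "GGTACG"]))  # -> ACGTACGGTACG
```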
B
Let me tell you, though, it ain't easy. If you see my screen right here, we have the very rough topology of our alpha metagenome pipeline. The data, from its raw-read form, has quite a journey to come out the other end as actionable insights, and if you've ever done data science in bioinformatics, this is standard. In putting something like this together, you will encounter versioning issues;
B
you will encounter computational runtime issues. Bioinformatics as a field of computer science is actually one of the least stable, and for very good reasons: it's one of the hardest. You need to coordinate execution of the most complex computational software, the most complex scientific software, and then a lot of the databases and the tools are stored all over the place. So I have to say, I think this slide was masterfully done by our head of community, JJ.
B
This is the topology of a metagenome project before Bacalhau, and I know any technologist in the audience knows what it feels like to hold something like this in your head. Where's this piece? Where's that piece? Did they update it? Is this database the new database on the FTP?
B
Is the computer... oh my goodness, it gets kind of crazy, and I have to say, I'm always just so impressed that in the sort of traditional scientific computing paradigm any of this is possible. You know, it's only through great work and, dare I say, genius on the part of our scientists. But it doesn't have to be this way, thanks to you guys.
B
So here is the New Atlantis open cloud platform solution. The first step is to gather our datasets and our databases and our tools, and make sure all of those are registered, validated, and filed on IPFS or a similar decentralized data solution.
B
Scientists get to work within the tools they're familiar with, electronic notebooking systems like Jupyter, and they have ready access to dispatch their protocols to a backend flexibly, and they can take advantage of things like the Bacalhau backend. And this is so important from a stability perspective, but it's also very important from a transparency and a validation perspective.
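For flavor, dispatching one containerized pipeline stage from a notebook might look roughly like this. This is a sketch, not New Atlantis's actual code: it assumes the `bacalhau` CLI is installed, the image name and CID are placeholders, and exact flags can differ between Bacalhau versions:

```python
# Rough sketch of dispatching one containerized pipeline stage to Bacalhau
# from a notebook cell. Assumes the `bacalhau` CLI is installed; the image
# name and CID are hypothetical placeholders, and exact flags can differ
# between Bacalhau versions.
import subprocess

def dispatch_stage(image: str, input_cid: str, args: list[str]) -> str:
    """Submit a Docker job whose input is mounted from an IPFS CID."""
    result = subprocess.run(
        ["bacalhau", "docker", "run",
         "--input", f"ipfs://{input_cid}",   # data comes in by CID
         image, "--", *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout                     # submission details / job ID

# e.g. dispatch_stage("newatlantis/humann-stage:alpha",  # hypothetical image
#                     "bafy...exampleCID",
#                     ["run_stage.sh", "/inputs", "/outputs"])
```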
B
You know, in order for these tools to be used ethically and not subverted, everything about their accuracy and the quality of the data going into them will need to be transparent. So, you know, this mission aligns so well with the principles of decentralized science and Bacalhau. Anyway,
B
I hope that was all right. On to the demo. So forgive me, I am not the UI guy; I'm definitely Mr. Backend, so this will be my best caricature of, you know, something fancy. But hey, right here we have our data selection and onboarding widget. This is plugged directly into an IPFS node on our cluster, and it allows scientists to access, generate, and onboard datasets to IPFS or a different decentralized backend.
B
For the purposes of this demo, we'll be working with the following set of Sequence Read Archive files. These are files that were taken from a scientific mission called ALOHA that seeks to capture a warming-related planktonic dataset. So, going on from here, we've got our data atlas. It's early stages of working with the system, but you'll see we have an interactive atlas with all of our metagenomes displayed on the map.
B
Each of these boat symbols is actually a place where the scientific expedition went out, collected water, extracted DNA, generated raw reads, put the raw reads together into metagenomes, and then, you know, shared their data.
B
That's been interfaced with our data atlas, so we can actually zoom in and see, sequence location by sequence location, what the relevant temperature patterns are. And then here's where this gets really interesting: on to the main component we're showing off today, which is called the HUMAnN gene cluster analysis suite. This is a protocol that we'll be running on Bacalhau on each of these datasets, and it actually tells us, let's call it, the economics of the local planktonic metagenome.
B
So here's what that looks like. Here we have each of the weeks of this experiment. These data positions were collected on a time lapse: every few weeks a new set of data was collected, and for each week we can pull up the following gene cluster analysis. So, for the metagenome collection that was taken on week 3, 2003,
B
we
get
this
heat
map,
which
tells
us
how
much
chemical
biological
activity
associated
with
each
of
these
Pathways
happened
in
the
planktonic
community.
During
that
point
in
time,
wow
look
how
much
stuff
these
planks
interrupt
to.
So
each
one
of
these
things
is
a
different
biological
pipeline.
You
might
call
it
that
the
Plankton
carryout
each
of
these
is
associated
with
different
biological
processes
and,
interestingly,
as
an
example,
many
of
them
are
associated
with
photosynthesis,
and
so
that's
just
one
example.
B
We can go through week by week and look at this heat map to understand how the biological activity of the food web is changing. So we can perform what would be called translational analysis, to correlate things like: how does temperature affect the ability of the planktonic food web to generate oxygen?
B
That would be some of these bars. To sequester carbon, that would be some other of these bars.
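The correlation being described can be pictured with a toy calculation; the numbers here are made up purely for illustration:

```python
# Toy version of the translational analysis just described: correlate weekly
# sea temperature against the activity of an oxygen-producing pathway pulled
# from the weekly heat maps. All numbers are made up for illustration.
from statistics import correlation   # Python 3.10+

temperature_c        = [22.1, 22.4, 23.0, 23.8, 24.5]   # weeks 1..5, hypothetical
photosynthesis_level = [0.91, 0.88, 0.84, 0.77, 0.70]   # pathway activity, hypothetical

r = correlation(temperature_c, photosynthesis_level)
print(f"temperature vs photosynthesis activity: r = {r:.2f}")
# A strongly negative r across many stations would be exactly the kind of
# signal that warming is suppressing oxygen production in the food web.
```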
But, as you can see, it's really just scratching the surface of all of the chemistry that plankton carry out for us. Sometimes at New Atlantis we like to joke that the biggest and most important bioreactor in the world is, and always will be, the ocean, and it's really true, you know.
B
Most of the chemistry that's ever been done in history was done in the ocean by the plankton, and it's on us to keep that going. So anyway, this is just a starting point for New Atlantis. As you can imagine, this type of science is extraordinarily complicated, and there's a large number of tools, open and closed source, that need to be made available for scientists. But, you know, this is our starting point for building a best-in-the-world open protocol system
B
that will be the obvious choice for any marine scientist, and we're hoping it'll lead to the kind of collaboration we need. Because in order to act quickly and decisively, we're not going to be able to waste time on coordination; we're going to have to get everyone passionate about solutions at the table, building together. And, you know, like I said to start the talk, I'm really grateful to you guys for building.
A
Fantastic. Wow, Stanley, this is incredible. It's funny, you know, because Gordon and I were joking months ago about the science bucket: you dip the science bucket in the ocean and it comes back with all this incredible information. But you've really, you know, made it happen, so it's fun to see this all coming together.
A
Could you say anything more? You talked a little bit about how decentralized storage with IPFS was an important part of the solution for the team, and then how using decentralized compute was also important for the users of New Atlantis DAO, whether it's market participants or people that want to understand how the data is being measured. Could you give any perspective on decentralization?
B
Yeah. So transparency is a big one, certainly; validation is a big one. One thing that Gordon is very passionate about is the idea that this has to be done economically. It can't be charity. You know, people need to get rewarded for creating value; that's the only way it's going to happen fast enough.
B
So having, you know, things on the blockchain allows for this to be done in a fiduciary way. But there are some even more important, functional things that decentralization does. When you start to talk to the scientists who are working at the top marine institutes, you really quickly discover something kind of interesting: each institute is very specialized. So, just off the top of my head, one of the big institutes has orders of magnitude
B
more marine, call it naval, capacity than the other institutes, so they have most of the data that you actually have to go out and collect. There are other institutes that specialize in the bioinformatics and the data science, so when it comes to protocols, they have the most important and interesting technologies. And then, as a third example, the other main marine institute that we communicate with has the best ecological and animal-specific experts. So, you know, if you think about it, to really understand how to help an ecosystem,
B
you need to get the data, so that's the first institute; you need to have good protocols to run on the data that can give you accurate insights; and then you have to have specialists on the animals themselves there to give insight into what actions to take based on the data that exists. So, you know, these scientists are completely focused on pushing their particular field further, and they're not necessarily focused on, or have the capacity, to do, you know, a standard open-source codebase to allow their stuff to be shareable and modular. So that's where I'm kind of imagining New Atlantis has a role to play: through operationalizing these decentralized compute and data technologies, we want to set the table that is inviting to all these different scientists and researchers to come, you know, sit down at, and, you know, scoop some of their contribution into our spicy data stew.
C
I'd just add to that too. A point that Courtney and I have been extremely focused on, as Stanley alluded to, is making this a for-profit entity. But part of that means that people have to buy into it and trust it, and really the only way... you know, because if you take a step back, the oceans are probably the biggest example of a global commons, right? I mean, it's literally 70% of the planet. And for, you know, a bunch of Californians,
C
if we took the view that we can just, you know, sort of privately determine what's valuable in the ocean and kept it all closed source, that isn't a particularly inviting stance to take, and it doesn't really invite a lot of open collaboration. And for the metrics and quantification services that we're providing to be widely considered as valid, and to underpin pricing models for biodiversity credits and related risk models, you really need to have that collective buy-in.
C
The only way to really do that is by having broad accessibility, complete transparency, and the ability for people to participate according to their contribution. And so, you know, the infrastructure that you're putting into place, and, you know, that Protocol Labs more generally is putting into place, is really... I think we're starting to see the limits of all these centralized systems, whether it's banking or cloud, and if we want to have an open and really fair system, it's going to have to be decentralized.
A
It's such a great example. Even, you know, speaking with other folks in the decentralized science space, when we think about academics, there's this big notion of FAIR data access, findable, accessible, interoperable, reusable, even in traditional academia. And the examples you give about different organizations and different research institutions working together, then, you know, the ability to scale this to participants that are outside of that community and give them the right incentives to participate... it seems like you guys have a really, really interesting platform coming together. So, super excited for you guys. Yeah.
C
I'll just, yeah, I was gonna say one last thing. Stanley uses a term, and my connection's a little spotty, so I don't know if he actually used this term or not in the presentation, but he has this concept of "batteries included," and his point about, you know, the bio guys... It's like, sorry, I should look at this; he was looking up my nose while I'm talking; sorry, my camera is too low.
C
The idea that these platforms can just work, and that you can allow scientists to do the science and spend a lot less time on the technology infrastructure, I think that's going to be a great accelerant. I think that'll end up accruing a lot of value, both to New Atlantis and, you know, to the overall Bacalhau thesis about decentralized science and accessibility.
A
You're so right. I mean, we talk to researchers often who, you know, to Stanley's point, are starved for resources, you know, compute resources. Maybe they get some through a grant, and they try to put together cloud resources where they can. But if you could invoke that compute on demand, and you could pay for it, or you could have a community component with different incentives to pay for that compute for the researcher, there are lots of interesting scenarios that opens up for the researcher. Yeah.
A
The easy button. All right, well, thank you guys so much; that was incredible. I'm gonna post some links into the Slack channel for everyone to follow up with here in a little bit. Stanley, if you want to send those my way, or Gordon, I'll make sure we post all that information for you folks. And then, from the Polybase team: super, super interested to learn more about your guys' platform, and I can hand it over to you guys if you're ready.
E
Awesome, thank you so much, Wes. Yeah, so welcome, everyone. I'm super excited to talk about Polybase. I'll tell you a little bit about the background story of how we started Polybase, I'll tell you about kind of what it is and what problems it solves, and then I'll tell you about what people are building on top of it, and then I'm happy to answer any questions after that.
E
A bit about my background: I worked at Uber, and I then worked at Cruise building simulation infrastructure for autonomous vehicles, and I've always had kind of an interest in building really good tools for developers. And so, when I left my last job, I teamed up with Callum and we started talking to a lot of web3 developers.
E
We ended up having about 100 or 150 conversations. These were kind of almost like user interviews, where we started understanding: what are the tools that developers are struggling with today? What are the challenges they have, and what would they like to see built? Out of that,
E
we basically pulled out a common thread, which was that developers today don't have a good database that's decentralized and that has web3 permissions, like wallet-based authentication and role-based access control, built in. And so they were kind of jerry-rigging a bunch of different tools together to make it work, so they could build their decentralized applications.
E
We basically took that problem statement, wrote our white paper in the summer of last year, and started fundraising. We raised our pre-seed round and then we got to work building. We actually launched our testnet in November of last year, and since then have gotten a ton of developer feedback and interest in building apps on top of Polybase. We just completed our first hackathon this week, where we had over 75 teams building dapps on top of Polybase, and I'll talk a little bit later about what those use cases are.
E
So that's where we are today. We have our mainnet launch scheduled for the end of Q2. That's going to bring kind of all the production-level features that our customers have been asking for, and we'll be able to ship those for them.
E
So that's kind of our founding story. Rolling back a little bit, our mission is to restore humanity's control of its information. What that really means is that we've seen information collection and exploitation at scale, which has really kind of eaten into people's rights and control over their own data. You think about politics, you think about commerce, you think about culture:
E
this is happening across the field. And so our solution to that is self-sovereign data, and our vision to solve this problem is to build a single database for the world that's developer-friendly, and then encrypted and self-sovereign. You can kind of think of it as the database version of Filecoin: Filecoin is a great place to store files; Polybase is a great place to store structured, indexed, and queryable data.
E
Using things like time-based access, you can do groups and roles; you can dive into kind of cool stuff where an NFT can actually be the permission to access a particular set of records in the database. And the cool thing is, you can actually sell that NFT on secondary markets, which makes access to your data something that people can buy and sell, which is also an interesting use case. And then the last part of it...
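The NFT-as-permission idea can be pictured with a small sketch. This is only conceptual: the dictionary below stands in for an on-chain ownership lookup, and all names are hypothetical:

```python
# Conceptual sketch of NFT-gated record access: whoever currently holds the
# token can read the records it gates, so selling the NFT transfers access.
# The dict below stands in for an on-chain ownership lookup; all names are
# hypothetical.
nft_owner = {"data-pass-42": "0xALICE"}            # token ID -> current holder
gated_records = {"data-pass-42": ["rec1", "rec2"]}

def read_records(token_id: str, requester: str) -> list[str]:
    if nft_owner.get(token_id) != requester:
        raise PermissionError("requester does not hold the gating NFT")
    return gated_records[token_id]

print(read_records("data-pass-42", "0xALICE"))     # holder can read
nft_owner["data-pass-42"] = "0xBOB"                # NFT sold on a secondary market
print(read_records("data-pass-42", "0xBOB"))       # access travels with the token
```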
E
So that's kind of the overall product. I'll kind of talk about the main benefits that it gives to developers. The first one is integrated auth and permissions. Today, if a developer wanted to create a dapp that had wallet-based login, wallet-based permissions, and NFT token gating, they would have to use a bunch of different services, have some kind of auth middleware that processes these things, and then manage that over time.
E
Some of our customers, like DAOs, find this very hard to do when they have dozens of people coming in and out of a DAO, maybe doing work for a couple hours, or maybe they're long-term DAO members, and so we're kind of seeing this move towards easier-to-manage permissions being really important. So we've built that directly into Polybase. The second benefit is high performance at a low cost. If you wanted to build a lot of these applications fully on chain, it would be infeasible even on lower-cost chains and scaling solutions like Polygon.
E
It would be extremely expensive to build a lot of these systems, or ones where you have, you know, thousands of transactions per second; you can imagine something like decentralized social, where it just won't be feasible. The cost and performance of Polybase is very similar to a traditional database like Postgres or MongoDB, so you're not really worried about per-transaction costs, and the costs are paid by the developers, so users don't have to worry about gas fees. And then the last bit is zero-knowledge proofs. Integrating zero-knowledge
E
proofs into applications today is actually quite difficult; the tools don't really exist that make that easy. What we've done is built this directly into the database layer, so it becomes very easy to do things like cryptographically prove attributes about private data, publicly verify your business logic, and show that your business logic is actually being applied to data fairly. And then all of that actually enables self-sovereign data, which means ownership and control of the data that one produces.
E
Our API is really simple. It's kind of like: you create a collection, which is a table; you create records in that collection, which are the rows of that database; and then you can query it. So it's a very simple, known kind of API way of accessing it.
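As a mental model of that collection/record/query shape, here's a tiny in-memory sketch (this is not the Polybase SDK, just the shape of the API being described):

```python
# Tiny in-memory model of the collection/record/query shape described above.
# This is not the Polybase SDK, just the mental model: a collection is a
# table, records are its rows, and queries filter those rows.
class Collection:
    def __init__(self, name: str):
        self.name = name
        self.records: dict[str, dict] = {}    # id -> record ("row")

    def create(self, record_id: str, data: dict) -> dict:
        self.records[record_id] = {"id": record_id, **data}
        return self.records[record_id]

    def query(self, **where) -> list[dict]:
        return [r for r in self.records.values()
                if all(r.get(k) == v for k, v in where.items())]

cities = Collection("City")
cities.create("nyc", {"name": "New York", "country": "US"})
cities.create("lon", {"name": "London", "country": "UK"})
print(cities.query(country="US"))              # -> [the New York record]
```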
E
I'll kind of jump into a couple of the use cases that we've seen so far that have been interesting. The first one has been decentralized social. Starting around the end of last year, when we had the Twitter takeover, people have been thinking a lot and building a lot in the decentralized social space. The problem there for developers is that if they want to build something fully on chain, it's not really scalable, but if they build it off chain, it's not really decentralized;
E
it's just another random social network. And so building social networks that are decentralized on top of Polybase is not only really simple, but developers get to actually deliver the USP of being decentralized and verifiable. The second one we've seen is dynamic NFTs. We've seen people build things like verifiable credentials in a way that makes it really simple and cost-efficient to update the metadata of credentials.
E
So, for example, if you had a credential where you need to update the data behind it every second or every minute, you can do that on Polybase without doing a whole Ethereum transaction to update the data, or without CIDs changing, things like that. And so it's been a really interesting place for gaming NFTs and verifiable credentials, and we're seeing some new applications of SBTs as well going forward. And the last kind of area
E
I'll mention is decentralized exchanges. We've seen DEXes want to build exchanges that are faster and cheaper. Today, building it all on chain, again, is extremely slow and it's expensive, and so it limits the transactions per second
E
on exchanges. We kind of see Polybase as being really foundational to building exchanges that are open, but also extremely efficient and fast. And then the other really cool thing is, you can have an order book on Polybase, and you can have the routing and the matching for bids and asks on Polybase as well, and that can actually be proven with a ZK proof: that a particular order came in and that it was
E
routed through the routing algorithm in a fair way. And that's something that we haven't been able to do with off-chain exchanges ever before. So yeah, that's kind of some of the use cases there. I will do a really quick demo of Polybase; I'll show you guys the Explorer, which is kind of the admin console.
E
Okay, so this is our Explorer. The first thing you'll notice is the root hash. This is the root hash of the rollup for the full Polybase database network, and so this is how you can kind of prove that the data that we say is in the database is actually in there. We'll actually be rolling out more ways to validate and verify this proof as well. Then we have the collections; these are the tables that developers have created on Polybase. We'll actually go into the studio.
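To make the root-hash idea concrete, here is a minimal conceptual sketch of how one hash can commit to every record in a database. This is a generic Merkle-tree illustration, not Polybase's actual rollup format:

```python
# Generic Merkle-tree illustration of how one root hash commits to every
# record in a database (not Polybase's actual rollup format). Changing any
# record changes the root, which is what lets anyone check that claimed
# data really is part of the published database state.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                              # odd level: duplicate last
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

records = [b'{"id":"rec1"}', b'{"id":"rec2"}', b'{"id":"rec3"}']
print(merkle_root(records).hex())                       # root over all records
records[1] = b'{"id":"rec2-tampered"}'
print(merkle_root(records).hex())                       # any change -> new root
```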
E
You can see a bunch of my test collections here. We'll kind of jump into the verifiable credentials one, and here you see the schema. So this is an example of how you would write the code for a Polybase collection. The schema will be public, so anyone can come in and verify
E
not just how the database is created, but also how particular rules are applied. So, for example, this public directive on the traits collection basically says anyone can read these traits; it's a publicly accessible table. And then these would be the columns of a particular table, so you can kind of see there's trait type, value, and public key as well.
E
Here again we have a public directive on the verifiable credentials metadata, and then down here we have something interesting called a call directive. You can set that to a particular public key;
E
you can set that to token ownership; there are different ways to delegate the permissions here. But what this means is that, if I take this out, this basically means that only a person holding, you know, being able to sign with, the private key that corresponds to this public key is able to add an attribute to the metadata here. And so we basically go in, we check that the public key equals the signer's key; if it doesn't, permission denied; otherwise, we can push the attributes into the array here.
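That rule is simple enough to restate as a sketch. The real check lives in Polybase's collection language; this plain-Python version only mirrors the logic just described:

```python
# The rule just described, mirrored in plain Python (the real rule lives in
# Polybase's collection language; this only restates the logic): the caller
# must sign with the key bound to the record, or the call is denied.
def add_attribute(record: dict, signer_public_key: str, attribute: dict) -> None:
    if record["publicKey"] != signer_public_key:     # check key vs signer
        raise PermissionError("permission denied")
    record["attributes"].append(attribute)           # otherwise push to the array

credential = {"publicKey": "0xISSUER", "attributes": []}
add_attribute(credential, "0xISSUER", {"trait": "member", "value": "gold"})
print(credential)
# add_attribute(credential, "0xSOMEONE_ELSE", ...) would raise PermissionError.
```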
E
What this does is allow us to build up really complex permissioning models that are publicly verifiable and publicly accessible, and then kind of build these autonomous systems on top.
A
It seems like it's a very hot space right now, both in terms of expanding the capabilities for smart contract developers to these more sophisticated services, and also just in general kind of offering that for the community; they want to build more robust applications and those sorts of things.
A
From your team's perspective, with ZK rollups being very, very popular right now: is the goal to sort of maintain the indexes and the datasets within the Polybase node network, and then use the ZK rollups as sort of a layer of verification for the users, in case they need to verify it? Or how does that architecture evolve over time for you guys?
E
Yeah, I didn't really go into the architecture, but we do have a white paper that goes really deep into it. The high level is that the architecture is split up into three sections.
E
The first is the actual storage layer. Right now, Polybase is running all the nodes that actually store the data, but we're building a pluggable system, and so you'll actually be able to store data on Filecoin, or store data on private databases. And then the second part of the architecture is the rollup, which actually takes that data, whether it's public or private, and then applies the ZK rollup, as well as the permissioning system, to that data, and that's what brings public verifiability to even private data. And then the last part is the peer-to-peer network of clients, and so you might have someone running a Polybase SDK client on their phone, in a React web app, anywhere,
E
you know, server-side as well, and that's all run on a peer-to-peer network, and so all the data is actually being shared. We use libp2p under the hood, which is actually a Protocol Labs project, and that network basically allows for sharing without any intermediaries.
A
Fantastic, I love it. So this is helpful, because, you know, a lot of folks are building these primitives in the space: a lot of other folks are building decentralized compute, decentralized databases, decentralized storage. One of the things we think about a lot is workloads and growth and sort of driving user adoption. Have you guys learned any lessons about that?
E
Yeah, I mean, our North Star has been: as a developer, you shouldn't have to think about any web3-specific concepts or technologies in order to use Polybase. And I think that's a good North Star for everyone to kind of keep in mind, which is that most of the web3 developers that are going to be in the ecosystem, let's say five years from now, are basically web2-native people, right?
E
So what we've done is, for example, the Polylang language that you saw is basically JavaScript with a couple of decorators, and the API is basically exactly the same as Firebase, which every web2 developer knows how to use already. So our focus has been very clearly on the developer experience being as simple as possible, with the minimal amount of new concepts to learn. What we've actually seen has been a problem in web3 infrastructure is that, I think, projects end up building really generalizable, generic solutions,
E
but then the concepts are really hard to understand, and from a developer perspective it becomes really hard to use and integrate, and then it takes a long time to kind of backfill the usability side of it. And so we've kind of taken the opposite approach, which is: make the interface extremely familiar, and then do the magic behind the scenes.
A
Benjamin's asking, there's some work he's doing with the Bacalhau bootstrapper, if there's some time at the end. Yes, absolutely. Yeah, so Benjamin, let's absolutely save some time. I think we might have a little bit, which is great. Yep, there is time;
A
let's use it; let's find out what people are working on in the space. And then just to recap: if anyone else has questions for Sid and the Polybase team before we transition... and if not, I do have one last question for you guys. Sid, before we let you go, I'll just give everybody a half second in case they have questions. If not, obviously we'll tag you guys in the Slack channel for the Compute Over Data working group, so people can ask additional questions. One follow-up for you, though, Sid, before we let you go: if folks want to get involved in Polybase and they want to start using the product, what's the best way for them to get onboarded? Do they go to the website? Should they reach out to you? What should they do there?
E
Yeah, totally. Our testnet is live, it's free, and you can play with it, create collections, and all that. If you have specific use cases you're interested in using it for, definitely reach out to me, Sid, at polybase.xyz. We've been working both, like, top-down and bottom-up, so a lot of developers come in naturally, but we also have a lot of partnerships we're building with enterprises for specific applications. I would love to hear what you're building.
A
Fantastic. All right, thank you so much, Sid and Shaki; appreciate you guys immensely, love the product, and more to come. We'll post some notes here in the Slack channel shortly. (Sid: Thanks for having us.) Absolutely, we appreciate it. And then, Benjamin... hey, Benjamin.
D
Sure, yeah. I'm not super used to talking about it yet, so a little bit of patience will certainly be appreciated. But basically, I sort of initially started writing Ansible tooling to deploy Bacalhau clusters, or not necessarily clusters yet, but groups of individual Bacalhau nodes, and as I was sort of getting into it, I started thinking, like, oh, it'd be cool
D
if this was, like, sort of a first-class-citizen way of deploying Bacalhau. Yeah, like, it would be cool if this was the way to do it. But I quickly realized that sort of starting an install guide with "okay, now you have to install Ansible; okay, if you want to install the latest Ansible, you've got to use pip3; if you don't have pip3 yet, you've got to install pip3..."
D
You know, it starts to become too many steps, and I started thinking about it.
D
I was like: what if I, just as a really dumb idea, wrote just, like, a bootstrapper that would install Ansible, installing pip3 if it had to in order to install Ansible, and then it would check out the playbook, the Ansible playbook and related roles, and sort of set everything up for you, and then run the playbook for you? With the ultimate goal just being that it would be, like, one command, where you just fetch the script and then you run it, right, and it should ask you questions.
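A stripped-down sketch of that bootstrap sequence might look like this. The repository URL is a hypothetical placeholder, the apt-get line assumes a Debian-style system, and the real bootstrapper does considerably more (menus, OS detection, error handling):

```python
# Stripped-down sketch of that bootstrap sequence: ensure pip3, use it to
# ensure Ansible, fetch the playbook repo, then run the playbook. The repo
# URL is a hypothetical placeholder, and the apt-get line assumes a
# Debian-style system; the real bootstrapper does much more.
import shutil
import subprocess

def ensure(cmd: str, install: list[str]) -> None:
    """Install a tool only if it is not already on the PATH."""
    if shutil.which(cmd) is None:
        subprocess.run(install, check=True)

ensure("pip3", ["sudo", "apt-get", "install", "-y", "python3-pip"])
ensure("ansible-playbook", ["pip3", "install", "--user", "ansible"])

subprocess.run(["git", "clone",
                "https://example.com/bacalhau-playbook.git"], check=True)  # placeholder
subprocess.run(["ansible-playbook", "bacalhau-playbook/site.yml"], check=True)
```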
D
It should walk you through and sort of hold your hand a little bit. Actually, I can probably even show it off, just for people who are sort of unfamiliar with it; let me just quickly do a git pull right now, if you guys don't mind me sharing my screen quickly. Yes?
D
It is in a hell of a broken state right now, so I'm not going to be able to share much; I'm in the middle of a massive rewrite at the moment, a refactor.
D
Okay, so hopefully you can see that. Basically, with Bacboot, the idea is that it sort of starts off with this nice splash page. I took these emojis from the home page, the emojis that sort of make up, you know, all the ways that you can use Bacalhau. You know, I thought it was just really cool to make a banner out of them.
D
So that's what I did. And then, yeah, basically, I hope it still works on macOS; I actually haven't tested this since I changed it. But it gives you, like, a menu option; you can sort of walk through it, and it will install whatever you need to get a Bacalhau client going.
D
So in this case, if I just run this, it's going to fail, and what's kind of neat is that the reason it's failed is because I didn't, like, have a password to run sudo, and it's nice enough to sort of tell you: hey,
D
you know, go ahead and try running that yourself. And so then the user can hopefully run that themselves and quickly figure out what the issue is. So it's really, really friendly as a tool. The sort of endgame of the tool is that it's intended to be a tool for actually spinning up entire clusters and doing all the cloud stuff as well.
D
So one of the things that I did today was I got cloud deployment finally working, having it actually provision and destroy DigitalOcean droplets. So it soon will be capable, or is already capable, I just haven't wrapped it into Bacboot itself yet, of deploying arbitrary numbers of DigitalOcean droplets, instrumenting them, potentially across regions at some point, and then tearing them down at the end. So the future of Bacboot is going to be: you'll have one command that you can run, where you...
D
So if you have, like, a temporary workload, it's really, really handy, and it's already pretty capable of that. I actually got nearly banned from the DigitalOcean API, because I tried to spawn 450 droplets at once, intentionally, to try and push it, because I have a limit of 500 on my account.
D
So I was like, I will use all of that, and it turns out that that was a bad idea. And the really fun part is that I ended up with 78 droplets still running on my account, and because there was no API access anymore, I had to delete them all by hand using the web UI, which, if you've ever deleted DigitalOcean droplets, you know is extremely painful.
D
So I guess, are there any, like, questions about, like, motivations or anything like that? Like, is anyone curious about anything about Bacboot? It's still fairly new, it's still very much in development, but... just if anyone's curious.
B
I have a little less of a question, more of a comment and an encouragement. I think Bacboot is really important to the ecosystem. I was just having an interesting conversation with David about how I get a lot of inbound from data centers that want to diversify their compute offering to scientific workflows, and man, they have weird hardware sometimes, and weird kinds of cluster topologies. And, you know, having something that can flexibly, and in a decentralized way, mobilize, you know, thousands of GPUs that are just sitting there doing
B
nothing could really open up, you know, compute for science. I think it's such an important thing, and I can't wait to follow your progress.
D
Awesome, thank you. Well, I guess I'll sort of make a quick comment on that, actually, and then probably leave it there, because I think we're just about at time. (We have a few minutes left, actually.) Yeah, so that's sort of the motivation behind the inventory feature.
D
It just sort of natively supports passing through an inventory file to Ansible, which is, you know, very, very easy to implement, but it means that you can sort of arbitrarily decide how you'd actually like to lay out your topology in your cluster.
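In Ansible terms, that pass-through might look roughly like the following sketch; the hostnames, group names, and playbook path are hypothetical placeholders:

```python
# Minimal sketch of the inventory pass-through: the user writes a normal
# Ansible inventory describing whatever cluster topology they want, and the
# bootstrapper hands it straight to ansible-playbook. Hostnames, group
# names, and the playbook path are hypothetical placeholders.
import subprocess

INVENTORY = """\
[requester_nodes]
head1.example.com

[compute_nodes]
gpu[01:04].example.com
"""

with open("inventory.ini", "w") as f:
    f.write(INVENTORY)

subprocess.run(["ansible-playbook", "-i", "inventory.ini", "site.yml"], check=True)
```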
D
So extreme flexibility is very much a built-in feature, and part of the philosophy of the tool is to be the most flexible way to deploy Bacalhau, but also be the simplest. So, you know, it's going to do everything from "I just want to install the CLI tools on my machine" to "I want to install it across, you know, a thousand nodes and create, like, a thousand-node Bacalhau cluster,"
D
Potentially
a
private
one,
so
yeah
GPU,
supports
planned
mounting,
like
arbitrary
file
systems
is
also
planned
so
like.
If
you
want
to
do
something
like
part
of
my
motivation,
is
I
have
a
bunch
of
AMD
epic
CPUs
at
home
or
computers,
I
guess
and
I
decided.
D
I was like, well, I would love to run Bacalhau on those nodes and run some of the most powerful Bacalhau nodes that I think will exist right now. But I was too lazy to do it by hand, and so, because I was too lazy to do it by hand, I decided that I would write a tool that would automate it all.
D
Sometimes... if anyone's familiar with Logstash, the original author of it used to say that he practiced hate-driven development, and, you know, he didn't mean it as a pejorative; he sort of meant it was love. And I do identify with that. Like, sometimes I just see a problem and I'm like: I would like this to not be a problem.
Sorry, I'm really, really excited. I think we're presenting Bacboot at the CoD Summit, the Compute Over Data Summit, on May 9th, and I suspect it's going to be pretty far along by then. I'm hoping to be basically feature-complete by the time we announce it.
B
I'm a gentle and friendly person to work with. And honestly, actually, we're kind of doing validation testing on the first node that we got from this data center company, which is, like, eight 3080s, okay, lovely, lovely, and they've got thousands of them. But a couple weeks down the road, whenever it works, if you would like to, you know, see if you can scale Bacboot to a couple thousand GPUs, we'd be really excited to make that connection.
D
Yeah, definitely possible. It's just going to be a matter of basically making the time to write the support for it. Anything that Bacalhau supports natively, Bacboot can also support; it's just making sure we'd sort of do it in, like, a way that makes sense and doesn't, like, burden the user.
D
So, you know, something where Ansible sort of instruments the machine, detects all the GPUs that are in the machine, and then sort of configures itself and then configures Bacalhau to utilize those GPUs. I think that's sort of fairly simple.
D
It's just going to be, it'll probably just be behind the priority of getting, like, cloud deployment and a few other things going. Or if it's... if this is in a matter of weeks, I might just, like, get the DigitalOcean support going and sort of leave that as the only plugin for now. We're trying to support, like, all the clouds, basically; like, not all of them, but all the major ones.
D
But of course that's gonna be a ton of work, so maybe the priority is going to be to support GPUs once we get, you know, just the one DigitalOcean cloud support landed in it. But we'll play it by ear, and you're always welcome to DM me, or we can talk in the Bacalhau channel and see where it's going, or help steer it, because it's an open-source project.
D
MIT licensed, and I'm open to other licenses too, but that's pretty much: do what you want with it.