From YouTube: Delta Lake Community Office Hours: December 15, 2022
Description
Let's Time Travel in 2022 to review Delta Lake!
Want a nice recap of the Delta Lake Project from this year? Join us for the next Delta Lake Community Office Hours where we will highlight the work done this year by the community across Delta Rust, Delta Sharing, and Delta Core. Please bring your Delta Lake questions to this live event and provide your feedback on the project roadmap!
Quick Links
Read Our Newest Blog Post: https://delta.io/blog
Join us on Slack: https://go.delta.io/slack
Delta Lake Releases: https://github.com/delta-io/delta/releases
A
Hi everyone, we are broadcasting live from the Delta Lake. Welcome to Delta Lake Community Office Hours. This will be our last office hours for 2022. We love seeing people joining from across the world, since we have a global community, so please let us know where you are joining from; we would be happy to see where our audience is from. Myself, I am in San Francisco. Florian, how about you?
A
Awesome, awesome. All right, so let's get started. Today we have a special edition of office hours, and in today's session we are doing a time travel of our Delta Lake project in 2022. Joining me today are two other long-term Delta Lake contributors, Florian from Back Market and Will from Databricks. So why don't I let them introduce themselves? Florian, do you want to go first?
B
Yes, thank you Vini. I am Florian Valeye; I'm working at Back Market, where I am a Staff Data Engineer. I'm working with the data tribe on many topics, and I'm also a contributor to the delta-rs project.
C
Yeah, sure. Hi everyone, my name is Will Girten. I'm a Senior Specialist Solutions Architect at Databricks and have been with the company for about four years. I'm an open-source Delta Sharing committer and evangelist, and I'm happy to speak with you all today.
A
Thank you, Will. This is super exciting; I get to be joined by Florian and Will. As for myself, I'm Vini Jaiswal, Developer Advocate at Databricks. I have been a part of the Delta Lake project since its inception, and I love bringing the community together and working on some of the advocacy efforts for the project.
A
I have heard from a lot of Databricks customers who build their pipelines on Delta Lake, so I'm very happy to be here. In today's AMA session we will provide the project highlights, and we also want to keep our regular format, which is answering your questions. We also want to know what your big ideas are for the project in 2023.
A
So, for those who are new to the project, let's start with what Delta Lake is. Delta Lake is an open-source project that enables building a lakehouse architecture on top of your existing storage systems like S3, ADLS, GCS, and HDFS. It offers many key features, including time travel, ACID transactions, scalable metadata, and more, which are required for making our data engineering dreams come true. It works with compute engines including Flink, Hive, Kafka, Presto, Spark, Trino, and others, and APIs in Scala, Java, Rust, Ruby, and Python.
A
Delta Lake enables building the lakehouse architecture on any platform and in any environment. For those who don't know how Delta Lake is being used: it actually solves a lot of challenges, some of which are around malformed data ingestion, difficulty deleting data for compliance, or issues modifying data for change data capture.
A
Your data can come from a variety of sources, and it is ingested either as batches or as real-time streams. On my screen I am showing the end-to-end architecture, a lakehouse architecture that allows users to run a lot of data and AI applications: your data needs to exist only once, and you can run many different use cases on it. To learn more about the project, we have a website.
A
We have GitHub, so head over to delta.io to learn more. But first I want to quickly take you through a journey of the highlights and the pace of innovation.
A
You can see from the release highlights how fast the pace of innovation has been within three years. Today we are at Delta Lake 2.2, and it's been a long journey to get there. The slide is tagged with the dates, so you can actually see what I'm talking about. Bear with me as we take a quick look back at where we have been and figure out how we all got here.
A
Delta Lake was first open-sourced in April of 2019, and that first version was pretty exciting: it included ACID transactions and schema management, and it supported streaming and batch. But we didn't stop there. We quickly added support for DML commands and vacuuming, and we added Scala and Python APIs. Next, we improved compaction and concurrency, and it became possible to convert Parquet tables into Delta using SQL only. More things were added in the next version.
A
These include DESCRIBE HISTORY, which lets you understand how your table has been evolving over a period of time. In 0.7 we added support for a whole bunch of different engines like Presto and Athena, and then in version 0.8 a lot of work went into improving merge and other features like parallel delete and vacuum. Soon enough, we released version 1.0 in May 2021, with features like generated columns and multi-cluster writes.
A
This was a very big release, as it included a lot of these features as well as cloud independence, which made Delta run everywhere. That was pretty exciting, and then it was impressive to see more and more community contributions, which led us to quickly release features like Delta Standalone. That again was a very big release, because it unlocked a whole new possibility for other ecosystem projects to work with Delta Lake, and soon Delta Lake connectors like Apache Flink and Presto followed.
A
Apache Pulsar support was released towards the end of last year as well. Users of Delta Lake wanted more performance and functional features, and the community heard, so we focused on releasing features like optimized data skipping, S3 multi-cluster writes, and column renaming, among others, in the first quarter of this year. Flash back to our Data and AI Summit in June 2022: Delta Lake announced version 2.0. This milestone version enabled our community to benefit from all the Delta Lake innovations, with the Delta Lake APIs being open-sourced.
A
These features dramatically improved data engineering performance and manageability compared to the previous releases. Then we added two more releases recently: version 2.1, which includes support for Apache Spark 3.3, streaming features like Trigger.AvailableNow, DESCRIBE DETAIL in the Scala and Python APIs, and returning operation metrics from SQL. And literally last week we released version 2.2, which has lots of query features, like improving the performance of queries containing LIMIT clauses and a significant reduction in query time for aggregate queries, because they now just need to read the table metadata.
A
We also added support for collecting file-level statistics as part of the CONVERT TO DELTA command, and we have made significant improvements in metrics, monitoring, and so on. The project is continuously adding more connectors; its commitment to supporting a wide variety of ecosystem connectors makes it easy to build ETL pipelines with various technologies.
A
That was a lot of features, and the pace of innovation in the Delta Lake project has now made it one of the most widely used lakehouse formats in the world. In 2022, Delta Lake monthly downloads skyrocketed from 685,000 to over 10.5 million.
A
Completing
this
work
to
open
source
you,
you
saw
right
a
lot
of
deltaic
readers,
while
tens
of
thousands
of
organizations
are
running
in
production
was
no
small
feat
and
due
to
this
activity
and
growth
in
unique
contributions,
the
Delta
Lake
Project
logged
and
astounding
63,
633
percent
increase
in
contributor
strength
over
the
last
three
years.
Now
we
have
more
than
218
contributors
in
Delta
lake
and
a
lot
of
Delhi
is
powering
a
lot
of
production
use
cases
community
and
it's
many
many
contributors
for
making
this
possible.
A
These are just some of the logos that I'm highlighting here; there have been many more people working from different organizations, so I'll pause there. This actually perfectly leads us to our next segue, because we have two awesome contributors here, Florian and Will, who are going to provide highlights on two other subprojects: Delta Rust and Delta Sharing. So let me see if there are any questions. There is one: "I want to clone a Delta table with Delta logs for disaster recovery; deep clone would be helpful."
A
Yes, it would, and we actually know about it; we are working on making deep clone available in 2023, so please stay tuned, and thank you for bringing that up. And please definitely ping your questions in our chat; we will be happy to take those questions. So I will go next. Florian?
B
Thanks. So Delta Rust is a native Rust library for Delta Lake, which provides bindings into Python and Ruby for the moment. This library provides low-level access to Delta tables in Rust, which is the core, and it can also be used with different data processing solutions, for instance Polars or the AWS SDK for pandas.
B
So you have the Rust core, which has different layers; we use many different dependencies, such as Tokio, object_store, Parquet, and Arrow, and we create the binding with Python using Maturin and the other library dependencies in Python. So if you would like to use Python with Delta Rust, you just have to pip install deltalake from PyPI, and if you want to use Rust, you can add the deltalake crate with Cargo to have only the crate.
B
We use object_store to have a good way of accessing Delta tables across cloud providers: you can access the local disk, but you can also access S3, or Cloud Storage for GCP, for instance. You have everything on the GitHub repository page, but you can also find the Python package on PyPI as deltalake, and the crate on crates.io for Delta Rust.
B
We can move to the next slide. So for this year: many releases, ten I think. We had a major one with Rust 0.5, and we also published a lot of different releases in Python. We can separate the releases between the Rust crate and the Python package. We deployed many things to Python because we fixed a lot of issues, and we would like to be really fast on this one to make sure that everyone can use the deltalake Python package. On the feature side, we improved the protocol support for reading operations.
B
It means that we collect more metadata inside the transaction log. To take one example: we can access the statistics written into the transaction log and make them available in Delta Rust too. We also unified and improved the object storage service support. That was not the case before, as we relied on many different crates to do so, but right now we use object_store, which is a good way to manage different object storage services.
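As a rough illustration of what "statistics in the transaction log" means, here is a minimal, stdlib-only Python sketch. This is not delta-rs code; the action names and JSON-lines layout follow the standard `_delta_log` convention, and the file names and numbers are made up:

```python
import json

# A Delta commit file (e.g. _delta_log/00000000000000000000.json) is a
# sequence of JSON objects, one per line; each object is a single action.
commit = "\n".join([
    json.dumps({"metaData": {"id": "t1", "schemaString": "{}"}}),
    json.dumps({"add": {"path": "part-0.parquet", "size": 1234,
                        "stats": json.dumps({"numRecords": 10})}}),
    json.dumps({"add": {"path": "part-1.parquet", "size": 2345,
                        "stats": json.dumps({"numRecords": 20})}}),
])

def active_files(commit_text):
    """Collect the add actions and their per-file record counts from one commit."""
    files = {}
    for line in commit_text.splitlines():
        action = json.loads(line)
        if "add" in action:
            add = action["add"]
            stats = json.loads(add.get("stats", "{}"))
            files[add["path"]] = stats.get("numRecords")
    return files

print(active_files(commit))  # {'part-0.parquet': 10, 'part-1.parquet': 20}
```

Engines use exactly this kind of per-file statistic for data skipping: a query can drop a file from its scan without opening it.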
B
We have our first experimental writer at the moment. It is still experimental, but we are working intensively on it to make it production-ready. We support the OPTIMIZE command, which was not the case before; it was introduced this year, and it was a huge improvement. We are also revamping a bit the integration between Python and Arrow by using PyArrow, to bring Python and Rust closer.
B
We are also integrated in many different repositories. To name a few, we have Polars and the AWS SDK for pandas, and we also have integrations with DataFusion, Dask, and DuckDB, so we are working on integrating delta-rs in different parts of the data tooling. On the stars side, we have more than 700 stars right now, so it's really a huge improvement; it has been a great year.
B
I would like to take the time now to thank all the contributors, and I would like to give a special shout-out to Will Jones and Robert, who did a great job in delta-rs improving the Rust crate and also the Python binding. Thank you; it was really an incredible achievement. And thanks to the many contributors who joined the community and built with us: right now we have more than 60 contributors.
A
Wow, that was a lot of work, Florian, and I know that you yourself have contributed a lot, so thank you so much. I think we are seeing a lot of momentum in Rust because of active contributors like you, so thank you for making life easier for Delta and Rust users. Awesome. For everybody who joined:
A
Please do ping your questions live in the LinkedIn comments, and we will be taking them. Next up we have Will, who is going to share updates on Delta Sharing, so check it out. Over to you, Will.
C
Yeah, awesome. Thank you, Vini. So another great project under the Delta Lake ecosystem that I think has the potential for major impact is Delta Sharing. I'd like to talk a little bit about Delta Sharing, give a recap of what has happened in the past year, and then also give another shout-out to some of our contributors. I'm going to take your questions at the end, of course.
C
The way I always describe anything is to start with the problem that's being solved, and the problem that Delta Sharing tries to solve is this: you might have a dataset, and you might transform it, clean it up, maybe add some derived columns or even add predictions to your dataset, and then you might need to share that dataset, whether within your organization or across organizations, even across Databricks accounts, for example.
C
One way you might do that today is to leverage the native cloud controls, and let's face it: at the end of the day, nobody wants to fiddle around with IAM policies. It's difficult, it's hard to get security's approval, it's error-prone; it's just not a scalable solution. That's really what the Delta Sharing project aims to solve: simplifying the sharing of your datasets, whether within your organization, say from your data engineering team to your data science team, or across organizations.
C
Maybe you sell a dataset for a premium. What Delta Sharing does is define an open protocol for sharing and exchanging datasets in real time, securely, and the architecture is very, very simple: it is really comprised of two parts. One is the data provider, and what the data provider does is basically expose one or more datasets through what we call a sharing server.
C
The second part of the architecture is the data recipient. The data recipient, through a sharing profile, a very small JSON file that has an endpoint and credentials for accessing the data, simply chooses whatever connector they want to use for their programming language or application, loads up the sharing profile, and can then immediately connect to those datasets and start using them. And then lastly, there is one thing that I think is really cool about the Delta Sharing project.
C
Is
that
none
of
that
data
traverses
through
the
data
sharing
server
at
all?
Instead,
that
data
sharing
server
is
really
responsible
for
compiling
a
list
of
pre-signed
URLs
file
URLs,
and
then
your
data
client,
your
the
data
recipient,
can
then
use
the
client
to
load
those
data
files
into
their
application.
So
again,
none
of
that
data
flows
that
that
did
a
sharing
server.
So
it's
a
very,
very
scalable
architecture
and
last
but
not
least,
there's
an
ever-growing
list
of
connectors.
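To make the "sharing profile" concrete: it is just a small JSON file naming the server endpoint and a credential. A minimal stdlib-only Python sketch (the endpoint and token below are made-up placeholders; the field names follow the open Delta Sharing profile format):

```python
import json
import os
import tempfile

# A hypothetical sharing profile: where the server is, and how to authenticate.
profile = {
    "shareCredentialsVersion": 1,
    "endpoint": "https://sharing.example.com/delta-sharing/",
    "bearerToken": "dapi-xxxxxxxx",  # placeholder credential
}

# The provider hands this file to the recipient, who points any connector at it.
path = os.path.join(tempfile.mkdtemp(), "profile.json")
with open(path, "w") as f:
    json.dump(profile, f, indent=2)

# Reading it back is all a connector needs to locate the server and authenticate.
loaded = json.load(open(path))
print(loaded["endpoint"])
```

A recipient would then reference a table through the profile, for example with the Python connector's `delta_sharing.load_as_pandas("profile.json#share.schema.table")`.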
C
If we go to the next slide, I'll talk a little bit about some of the key milestones over the past year. Over 2022, the Delta Sharing project has put out over five new releases, and a lot of major features have been added.
C
Some of my favorite features that have been added are Spark streaming support, time travel, Change Data Feed, support for Google Cloud Storage, and also additional APIs for querying things about your share metadata: your table version, and all the metadata about the datasets you are sharing. And last but not least, there has also been increased demand for downloading the product; we've seen over 500,000 unique downloads across PyPI and Maven.
C
So it's an ever-growing community. And if we go to the very last slide, the one thing I'd like to touch on is just how much of an impact there has been over the past year. I counted this morning: we have over 10 new connectors that have been added, with one currently pending. In this past year we initially started with just the Python and Apache Spark connectors, but we've also added 10 new connectors, from Java, MLflow, Node.js, and Go to Power BI, Excel, and R.
C
These are all really good connectors, so it gives you, as a data recipient or a data provider, the ability to share your datasets across any number of consumers in a format or a language that's convenient for them. There have been a ton of contributors, actually too many to list here, but I just wanted to give a quick shout-out; the number of connectors we've added is super impressive to me.
A
Wow, that's super awesome, Will. This is super awesome, by the way; there are so many connectors, and so many PRs that have been raised and actually closed across all these projects. With that, there is one question: "I am trying to create a near real-time Delta Lake for my application. One of my data sources is implemented as a near real-time Delta Lake. Would it be possible for me to use Delta Sharing to stream those changes from a CDF into my Spark stream, and then into Delta Lake?"
A
Awesome. Florian, there's a question for you: "Is there an easy way of transforming an Arrow schema into a Delta schema in Rust?"
B
That was a change during this year: we moved the transformation of the Arrow schema into the Rust binding. It was not the case before, because we tried to translate it on the Python side, but right now everything is done in Rust, and the binding with PyArrow is also supported in Rust.
C
Yeah, so that really goes back to the architecture concept I was talking about. None of the data actually traverses the Delta Sharing server, so you don't need to worry too much about scaling it. There is really no limit per se to the size of the Delta table that you can share; it really comes down to the client.
C
If it's a super, super huge table, then it kind of makes sense to add in things like limits, but ultimately there is no limit to the size of the Delta table you can share.
A
Got it, got it. Awesome. Florian, there's a question on reading pandas DataFrames from AWS S3: was there any work being done there this year?
B
And
a
lot
of
improvements
were
done
to
use,
for
instance,
the
profile
or
some
configuration
to
access
to
S3
in
different
ways.
It
was
not
the
case
before,
because
we
were
relying
on
only
on
the
ambient
variable
for
the
access
key
or
the
security
key
right.
Now
you
can
use
profile.
You
can
use
different
parameters
to
make
to
make
the
connection
between
Delta
rest
and
and
the
S3
Delta
tables.
A
Got
a
guard,
and
then
there
is
a
question
on
what
are
other
Integrations
that
are
coming
up
in
Delta
rest
project.
B
Oh,
it's
a
wide
question
because
I
think
we
can
use
Delta
arrests
in
many
many
different
other
tools.
The
last
one
was
regarding
Panda
integration
with
iws
SDK
to
just
to
make
sure
that
we
can
have
an
access
on
the
Delta
table
using
a
small
runtime
with
a
Lambda
on
Adobe
s.
So
it
was
a
new
milestone
for
us.
The
next
one
has
been
Polar
Polar,
IRS,
On,
The,
Rough
Side.
Maybe
the
next
one
will
be
another
another
tool.
B
We
don't
have
any
any
clue
right
now
about
the
next
integration,
but
we
would
like
to
improve
the
the
python
binding
into
different
data
tools
processing.
So
if
you
have
any
idea,
don't
hesitate
to
submit
an
issue
to
for
having
a
new
feature
request
about
one
topic.
A
Yeah
exactly
yeah
for
any
projects
for
that
matter.
Well,
there's
another
question
so
for
Delta
sharing
to
happen,
you
know
what
are
what
are?
What
are
some
of
the
security
security
aspects
that
Delta
sharing
protocol
takes
care
of,
for
example,
if
I
have
like
different
systems,
I'm
logging
into
AWS,
that's
where
I
have
my
tables,
but
that
they
are
not
in
Delta,
like
some
other
format
and
I
have
to
connect
to
Delta
Lake
like
does
it
integrate
with
AWS
security?
How
does
that
work.
C
You
know
previously,
if
you
were,
or
even
today,
if
you
were
to
use
cloud
native
controls,
your
data
recipient
or
data
team
that
needs
to
access
that
shared
data
set
has
to
worry
about
all
the
different
Cloud
security
protocols,
ultimately
that
that
kind
of
maintenance
is
shifted
into
the
data
provider.
C
So
it's
a
one-time
setup
essentially
and
you're,
essentially
giving
trust
to
that
sharing
server
to
assume
a
role
or
an
identity
to
pre-sign
all
those
data
files
with
a
basically
an
expiration,
a
short-lived
URL,
so
that
data
recipient
can
read
the
data
temporarily.
So
there's
a
data
recipient.
You
don't
really
need
to
be
too
concerned
about
the
different
Cloud
protocols,
security
protocols,
in
fact
you're
totally
abstracted
from
it.
You
might
be
joining
two
tables
together
from
say,
AWS,
S3
and
then
Google
cloud
storage
and
you
would
never
even
know.
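The idea behind those pre-signed, short-lived URLs can be pictured with a toy HMAC signature over a file path plus an expiry timestamp. This is only an illustration of the concept, not real cloud pre-signing (S3's SigV4, for instance, is considerably more involved), and every name and value here is made up:

```python
import hashlib
import hmac

SECRET = b"provider-signing-key"  # placeholder for the provider's credential

def presign(path, expires_at):
    """Toy pre-signed URL: path plus expiry, authenticated with an HMAC."""
    msg = f"{path}?expires={expires_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"https://bucket.example.com/{path}?expires={expires_at}&sig={sig}"

def is_valid(url, now):
    """A URL verifies only if the signature matches and it has not expired."""
    path_part, query = url.split("https://bucket.example.com/")[1].split("?")
    params = dict(p.split("=") for p in query.split("&"))
    msg = f"{path_part}?expires={params['expires']}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["sig"]) and now < int(params["expires"])

url = presign("part-0.parquet", expires_at=1_700_000_000)
print(is_valid(url, now=1_699_999_000))  # True  (before expiry)
print(is_valid(url, now=1_700_000_100))  # False (expired)
```

Because only the signing side holds the secret, the recipient can fetch the file for a short window but cannot mint new URLs, which is why the server never has to proxy the data itself.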
C
Ultimately,
the
provider
would
take
care
of
setting
up
the
appropriate
credentials
across
the
data
sets
and
then
there's
simply
just
a
you
know:
default
access,
key
token
authentication
that
takes
place
between
the
their
recipient
and
the
sharing
server.
And,
of
course,
you
can
increase
that
as
well.
You
can
put
an
API
Gateway.
We
have
a
great
blog
that
just
came
out
into
the
Delta
website
that
details
that.
So,
if
you,
if
you're
curious
about
that,
I,
definitely
give
that
one
a
read
but
all
twins
with
data
recipient.
A
Awesome,
that's
right
that
there's
another
question
on
Iran
vacuum
command
and
it
again
successfully.
How
can
I
see
what
version
it
has
created
in
our
data
table,
so
I
think
every
transaction
that
you
do
with
Delta
Lake.
Either
you
update
just
one
row,
one
record
or
even
like
delete
perform
deletes
it
will.
A
It
should
create
a
new
version
for
a
new
version
entry
into
your
log
file
and
to
actually
view
all
those
details
you
can
use,
describe
detail
command
and
it
should
allow
you
to
see
the
time
changes,
the
operation,
metrics
and
all
of
those
commands
and
I
think
there
are
other
questions
about
like
if
I
delete
the
table
doesn't
get
deleted
immediately.
A
So,
in
that
regards
what
Delta
Lake
does
is
it
allows
you
to
it
allows
you
to
time
travel
in
case
if
that
delete
was
accidental,
and
that
way
you
know
it
saves
you,
like
a
burden
of
you,
know,
lose
income
Records,
so
it
whenever
you
perform
a
delete.
It
actually
does
a
soft
soft
retention
of
those
records
for
seven
days
and
after
seven
days
it
automatically
deletes
it
deletes
deleted.
A
However,
if
you
do
want
to
immediately
delete
perform
the
delete
action
and
have
it
remove
all
those
wires,
you
can
actually
specify
the
retention.
Duration
and
retention
can
be
zero
hours,
for
example.
Hope that helps. Awesome. Will, would you tell us what else you are working on? What are some features that you will be working on in 2023?
C
I'd love to spill the beans, but I can't; still, I'll give you a little bit of insight. Currently, the Delta Sharing protocol only allows you to share Delta Lake tables, but there's a possibility that the protocol might expand in the near future. Another thing that I'm really excited about is just more connectors being added to the ecosystem.
C
Again, nothing is set in stone here; it really is ultimately up to the community to provide that demand, and we shift development that way. But I'm really excited, I think, for an upcoming Trino connector, as well as Looker.
A
What will that do, Will? Don't we already have a Trino connector in the Delta Lake ecosystem?
C
We do; there is a Delta Lake connector for Trino. However, this connector is a little bit different, in that it will understand the Delta Sharing protocol.
A
Awesome sauce; yeah, that's exciting. Since we got a lot of questions about what is coming next: we are not done here. Let me also quickly go back to my screen share.
A
We
are
working
on
like
a
huge
roadmap
and,
of
course
definitely
we
are
not
done.
The
community
is
constantly
working
on
the
new
features.
So
if
you
want
to
have
a
voice,
if
you
do
want
to
contribute,
please
do
head
over
to
our
GitHub
and
start
contributing.
There
are
some
cool
things
which
are
coming
up
like
clones.
I
spoke
to
Delta,
connector
identity,
columns,
things
like
that,
and
these
are
the
ways
to
engage.
A
You
know
you
can
actually
go
through
our
Delta,
like
slack
user,
a
slack
Channel
Delta
like
GitHub
YouTube,
channel
Google
group
and
several
others
start
with
good
first
issues.
We
have.
We
have
several
issues
that
are
identified
start
with
that,
if
you
don't,
if
you
don't
feel
comfortable
head
over
to
slack
Channel
and
you
know
start
asking
questions,
if
you
are
stuck
anywhere,
then
we
also
have
like
lot
of
issues
which
people
of
which
Community
actively
talks
about
participate
in
those
discussion.
A
If
you
have
idea-
or
if
you
have
a
code
that
you
have
made
work
in
your
own
environment,
you
can
contribute
that
and
you
create
a
cool
request,
so
I
would
be
I
would
be
excited
and
we
would
love
to
see
more
contributions
from
one
of
you.
So,
let's
see
you
know
what
we
get
from
you
all.
A
Great, all right. Thank you so much, Florian and Will, for joining us, and thank you to all who asked questions and helped us run this event. 2022 was super awesome, as we saw, and we definitely are going to bring more excitement in 2023. So thank you all. Bye!