From YouTube: Delta Lake Community Office Hours
Description
Join us for the next Delta Lake Community Office Hours and ask us your #DeltaLake questions. Thanks!
B
Do we need to record this?
B
Awesome, great to, you know, get started here in just a few minutes.
A
Right, so we're good to go. We have a bunch of people who are active. Hey folks on LinkedIn, we are actually live right now. We're going to start in about two minutes. Right now you've got two of the hosts; three others are going to join very soon, so do not worry, they will show up. Hey Ryan, welcome aboard. By the way, we are live already; for once the streaming servers worked perfectly fine, so we're actually live on LinkedIn Live and on YouTube right now as we speak, so welcome aboard.
A
Scott, how's it going, buddy? Hey, we are live already, just to let you know. We are live on YouTube and we are live on LinkedIn; everything worked perfectly for the streaming service for once, so this is awesome. For anybody who is live with us on LinkedIn and on YouTube, please ask your questions right now.
A
If you have any, we're going to start off with a few questions. By the way, our friend QP will join us shortly. Saying that, Vini, once you're ready, take it away, maybe with introductions, maybe with the context; by all means, it's your show.
B
Yeah, thanks Denny. Hello everyone, thanks for joining our first Delta community office hours for 2022. We are joined by Ryan, Scott, and Denny from Databricks, and QP will be joining shortly from Scribd. At this time, why don't we take a moment and just do some introductions? So Ryan, why don't you introduce yourself?
D
Hi everyone, I'm Scott. I'm also a software engineer on the same Delta ecosystem team as Ryan here at Databricks. Over the past six months I've been working on various ecosystem and open source features and projects. I've worked on the Delta Standalone writer, which is a single-JVM writer that gives you a Spark-less way to write to and read from Delta. I've also been working on the various Flink sink and source connectors we've been building, and recently on some open source features like data skipping, as well as working with community members on multi-cluster writes with S3. It's a lot of exciting stuff, and I'm happy to answer any questions about that.
B
That's awesome, looks like a lot, Scott, thanks. Denny, why don't you introduce yourself?
A
Oh, thanks a lot, Vini. Hi everybody, my name is Denny. I'm a long-time Spark and Delta Lake guy, here to talk about various aspects of both from a community perspective, and also some of the integration work for the connectors we've been doing. But I'll definitely want to have Ryan and Scott, and for that matter QP once he joins, take the show. And then that's it for me; back to you, Vini, thanks.
B
Thanks, Denny. And myself: I am a developer advocate at Databricks. I'm here because, you know, I want to make sure that you have your questions answered on anything you are building with Delta Lake, and so we have organized this panel. Hopefully you get your questions answered. We will be monitoring, you know, YouTube as well as LinkedIn, so please post your questions there. So just to start off with, maybe we have some questions from Alexander, so he's asking.
B
Great. And then Scott, why don't you give us an update? I know you are working on multi-cluster writes, so why don't you give a little bit of information around what issue it resolves and what made us work on that specific feature?
D
So one of our awesome community members had the idea to use an external store, DynamoDB specifically, to provide that mutual exclusion for us. They've written a new implementation, a new algorithm that, throughout the writing process, essentially grabs a lock for that file from this external store, completes the write, and then releases that lock later on. So it's really exciting, and I'm hoping that once it gets released we can get some awesome feedback on how people think about the project and how they use it for these multi-cluster writes.
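For context, here is a minimal sketch of the mutual-exclusion idea Scott describes, using a DynamoDB conditional write as the per-file lock. This is an illustration only; the table name, key schema, and helper functions are hypothetical, not the actual connector's implementation:

```python
import boto3

# Hypothetical lock table with primary key "fileName".
ddb = boto3.resource("dynamodb")
lock_table = ddb.Table("delta_log_locks")

def try_acquire_lock(file_name: str) -> bool:
    """Atomically claim a transaction-log file name.

    The condition makes the put succeed only if no other writer has
    already registered this file, which supplies the putIfAbsent
    semantics that S3 itself does not provide.
    """
    try:
        lock_table.put_item(
            Item={"fileName": file_name},
            ConditionExpression="attribute_not_exists(fileName)",
        )
        return True
    except ddb.meta.client.exceptions.ConditionalCheckFailedException:
        return False  # another cluster won the race

def release_lock(file_name: str) -> None:
    # Called after the write to the log file has completed.
    lock_table.delete_item(Key={"fileName": file_name})
```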
B
So what can people expect? When can they expect this feature, Scott?
D
In the next couple of months, we hope. We're still working right now, these very weeks, on the actual project and then on the PR feedback and whatnot. So we'll see how it progresses over time.
B
Awesome, thank you, Scott. All right, and then I think there is a question from Hubert. He's asking: "Hi, are you planning to add some additional indexing? I know how to use partitioning and Z-ordering, but the problem is that then someone else comes and queries on a field which requires loading the whole data set."
A
Oh yeah, I can probably tackle that one a little bit. Actually, I'm going to follow up with Hubert a little bit on the first part, which is: are you talking about open source Delta Lake, or are you talking about Databricks Delta Lake, right? In the case of additional indexing with Databricks Delta, it right now includes OPTIMIZE with Z-ORDER, which you had called out, which is great. There are also bloom filter indexes; that capability is also within Databricks Delta.
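As a quick illustration of the Databricks Delta capabilities Denny mentions (the table and column names here are hypothetical, and these commands were Databricks-only at the time of this session):

```python
# Compact the table and co-locate rows by a frequently filtered column.
spark.sql("OPTIMIZE events ZORDER BY (eventType)")

# Add a bloom filter index on a high-cardinality lookup column.
spark.sql(
    "CREATE BLOOMFILTER INDEX ON TABLE events "
    "FOR COLUMNS (deviceId OPTIONS (fpp = 0.1, numItems = 1000000))"
)
```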
A
Now, saying this, we are in the process of publishing our Delta OSS roadmap, and we did want to give you all a quick little preview, a quick little update: we actually are planning to open source portions of OPTIMIZE as well. The details are going to start showing up in our proposed 2022 H1 roadmap.
A
So we would love to have your feedback on that. It'll show up on GitHub; currently there's the 2021 H2 roadmap, and we're going to close that up pretty soon, mainly to release the 2022 H1 roadmap, where we're going to call out some of the features we're currently open sourcing based on the feedback we've received from the community. Hopefully that helps you a little bit with your question, Hubert, and the community in general as well.
B
Hey QP, you joined finally! Hey QP, thanks for joining. Why don't you introduce yourself?
E
Sorry for being late, there were some technical difficulties. Hi everyone, I'm QP. I work at Scribd as an engineer, and I am mostly working on the Delta Lake Rust implementation.
B
So, since you are already in the rhythm, QP, why don't you share some progress around what features are being released and the status of the Rust connector?
E
Yeah, sure. Recently I personally have been working on a formal verification of our S3 multi-writer implementation in Rust, and it's a pretty cool project. The tool that we use is called Stateright. It lets you formally spec out your distributed system design in Rust through state machines, and it can basically explore all possible states and make sure that every possible state results in correct behavior.
E
There are a lot of other great contributions from the community as well. Thomas and Robert are pushing on a new version of the multi-writer support in Azure, and Robert is also working on a new high-level writer API that would be the foundation for us to implement proper SQL query support in Rust. So that would be really cool when it's done.
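For readers who want to try the Rust implementation without writing Rust, delta-rs also ships Python bindings; a minimal sketch, assuming a table already exists at the (hypothetical) path:

```python
from deltalake import DeltaTable  # Python binding over the delta-rs core

dt = DeltaTable("s3://my-bucket/delta/events")  # hypothetical location
print(dt.version())   # current table version
print(dt.files())     # data files in the latest snapshot
df = dt.to_pyarrow_table().to_pandas()  # read without a Spark cluster
```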
B
Got it, got it, and thank you for those insights. So what is your favorite feature that you are working on, that's keeping you up at night, QP?
B
That's great to hear, so the community will be having a lot of new things to try out. Awesome. So I have some questions on LinkedIn; there's a question from Vivek. He's asking: "Please throw light on the data mesh concept or architecture, and how is it related to Delta Lake?"
B
Yeah, so no problem, we will follow up on that, and there is actually...
A
All right, so first of all, we do want to call out that there are actually lots of different concepts of what a data mesh is. Typically I would probably follow Martin Fowler's definition of data mesh, just because that's probably one of the more common examples of how we're supposed to do it. If you want to think about it from a high level, the idea is that instead of actually having a quote-unquote centralized location for all your data, you have a mesh, i.e. you have multiple locations where your data is. The reality when it comes to something like Delta Lake, and for that matter, honestly, any data technology, any data lake technology, any database technology (so I'm not actually trying to promote any one specific thing in this case), is that most systems are in one form or another a data mesh. For example, as an oversimplification:
A
Your sales team has one set of data lakes, your HR team has another, your engineering team has another, but they interact with each other, right? From that high-level perspective, there's your data mesh. So, irrelevant of how you want to define the details deeper than that for a data mesh, the reality is that in all systems, when it comes to a data mesh, you do care about the transactional reliability of that data.
A
And that's where Delta Lake comes in very nicely, because most people are trying to store their data in a data lake because of either the volume, and/or the fact that the data has very flexible schemas, or is semi-structured, or whatever else. All of the above come into play, and so Delta Lake allows you to store your data in a data lake but actually provides that transactional reliability, and that's really important in any data mesh architecture.
A
From the standpoint that you want to make sure that, for sake of argument, HR is trying to access sales data, they're doing so in a reliable way; in other words, they're not doing dirty reads of snapshots, reading inconsistent data, things of that nature. You want to make sure any of those systems that are interacting together are actually interacting very reliably, and so the two, pun intended, mesh very well. But by the same token, it always comes down to how you implement it.
B
That's an amazing description, Denny, thank you for that. There's a question from Bhushan around Delta Live Tables: what is the timeframe for GA? I guess from the open source perspective we are still thinking about what features to bring into the roadmap, so please vote for that feature. But if the question is in terms of Databricks, would anybody from the panel like to answer it?
B
The question is: what is the timeframe for GA for Delta Live Tables?
C
Yeah, I think this is probably not a good channel for asking Delta Live Tables questions, because most of the people here are not working on that project. So it's better to try to talk to Databricks; I think they also have a forum where you can ask questions.
A
So the context, I think, of what you're trying to ask is basically dynamic partition overwrites in Delta Lake. Right now, as of Delta Lake 1.1, there is replaceWhere, which allows you to do it arbitrarily. This is where Ryan and Scott are going to correct me if I'm wrong: there are plans for this, but I honestly don't know what our current timeline for that is right now.
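For reference, a minimal sketch of the replaceWhere option Denny mentions; as of Delta Lake 1.1 the predicate can be arbitrary rather than limited to partition columns (the path, column, and dataframe here are hypothetical):

```python
# Atomically replace only the rows matching the predicate,
# instead of overwriting the whole table.
(df.write.format("delta")
   .mode("overwrite")
   .option("replaceWhere", "date >= '2022-01-01' AND date < '2022-02-01'")
   .save("/delta/events"))
```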
A
That part I don't know, but there certainly are plans. For that matter, the reason why I called out the upcoming 2022 H1 roadmap on GitHub is because we would love you to go ahead and provide feedback. We actually did get some feedback about dynamic partition overwrites in our Delta Lake GitHub; I think I saw a couple in the last couple of months. So that ask is coming up, but by the same token, there have been other asks that have been higher priority for the community, such as the Flink connector and the Presto connector, so we've been focusing on those. If there is more of an ask for dynamic partition overwrites, please do chime in so we can take account of that as we start building up the roadmap.
B
Yeah, and we will post the links to the GitHub as well, so that you can vote for the issue, or maybe, if you have any suggestions, you can contribute. Awesome. Moving on to our next question, there's a question around CDC: "We implement CDC messages from RDS to Kafka to Spark and save them into Delta tables; however, people have different understandings of bronze, silver, and gold tables. Want to get some more official suggestions."
A
I can probably tackle that, unless anybody else wants to. All right, so there are two quick things: there's the OSS Delta answer and there's the Delta on Databricks answer. The OSS Delta answer is that when you're working with CDC, it depends on whether you're trying to use Delta as your CDC source or as your sink. Now, if you're using it as your source...
A
In fact, there's a very good tech talk, mind you, I'm in it too, so my apologies, that's a shameless plug here, called "Using Delta as Your CDC Source." It's Paul Roome who is the real person who actually knows what he's talking about, but I happen to be there as well. So again, shameless plug, but that should provide you some context, in which we also describe that concept of bronze, silver, gold.
A
So if you're talking about data from a Kafka topic, or for that matter Kinesis or Azure Event Hubs or whatever else, as that data is dumping in and you're making that your CDC sink, then your Delta table actually acts as a very good table to store both the actual fact table itself and also the change history, i.e. the inserts, updates, deletes, the actual actions that you want to apply. Now, in the case of Databricks Delta:
A
We also include Change Data Feed (CDF), which itself generates that action table, that change table I was just referring to, automatically for you. And if you take a look at how CDF works, in a lot of ways, in that webinar I was referring to, we actually talked exactly about how to generate that table yourself as well, if you want to do this with open source Delta.
A
So there are a lot of really cool things that you can leverage here. By all means, go ahead, and hopefully that helps you with the concept of how to differentiate between your bronze, silver, and gold when it comes to CDC.
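To illustrate the Change Data Feed Denny describes, a minimal sketch of enabling and reading CDF (Databricks Delta at the time of this session; the table name and starting version are hypothetical):

```python
# Turn on CDF for an existing table.
spark.sql(
    "ALTER TABLE sales_bronze "
    "SET TBLPROPERTIES (delta.enableChangeDataFeed = true)"
)

# Read the row-level inserts/updates/deletes recorded since version 2.
changes = (spark.read.format("delta")
           .option("readChangeFeed", "true")
           .option("startingVersion", 2)
           .table("sales_bronze"))
changes.select("_change_type", "_commit_version").show()
```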
B
Yeah, and also there is some good documentation around how you can go about architecting the medallion architecture. We will share some resources so you have a good understanding around best practices as well. Thank you, Denny, for that insight. There is another question on an out-of-memory issue. Sam is asking: "Hi, any advice on troubleshooting out-of-memory issues on Spark Delta Lake? I found it is a bit difficult to correlate the Spark DAG and SQL logical plan with the actual code lines run."
C
Yeah, this is a pretty challenging problem to debug, I think, for any out-of-memory issue, because it can happen in any place, and whatever you see in the stack trace may not point to the real place that uses a lot of memory. In the past I generally just try to do a heap dump, not sure if that's possible in your environment, but for me I usually get the heap dump and try to look at what type of object has the most memory usage in it, then try to find out which one is the major memory user and then debug. These issues could be caused either by Delta, by Spark, or by your code.
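As a concrete starting point for Ryan's heap-dump suggestion, these standard JVM flags make Spark capture a dump automatically on OOM; the dump paths are hypothetical, and the resulting .hprof files can be inspected with tools such as Eclipse MAT:

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("oom-debugging")
         # HotSpot flags: write a heap dump whenever an executor or the
         # driver dies with OutOfMemoryError.
         .config("spark.executor.extraJavaOptions",
                 "-XX:+HeapDumpOnOutOfMemoryError "
                 "-XX:HeapDumpPath=/tmp/executor-dumps")
         .config("spark.driver.extraJavaOptions",
                 "-XX:+HeapDumpOnOutOfMemoryError "
                 "-XX:HeapDumpPath=/tmp/driver-dumps")
         .getOrCreate())
```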
B
Yeah, perfect. And as Ryan mentioned, if you have any specific issue, we are happy to look at it. Maybe have a conversation with us on our Delta OSS Slack channel, and we will be happy to go through your issue and provide any suggestions. Hope that helps, Sam; great question.
B
There is another question: "A Spark streaming job appends data as Delta on HDFS and periodically checks the transaction logs and runs compaction for uncompacted partitions. I have tried adding debug logs in the code, viewing the Spark history server DAG, and reading logs." I think he's just expanding on what problems he's having, so we are happy to take a look at it on Slack. Sam, please reach out to us on Slack.
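For context on the compaction pattern the question describes, here is a minimal sketch of partition-level compaction with open source Delta, following the pattern from the Delta documentation; the path, partition predicate, and file count are hypothetical:

```python
path = "hdfs:///delta/events"  # hypothetical table location

# Rewrite one partition into fewer, larger files.
# dataChange=false marks the commit as a pure rearrangement of existing
# data, so downstream streaming readers can safely ignore it.
(spark.read.format("delta")
 .load(path)
 .where("date = '2022-01-05'")
 .repartition(16)
 .write.format("delta")
 .option("dataChange", "false")
 .mode("overwrite")
 .option("replaceWhere", "date = '2022-01-05'")
 .save(path))
```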
B
Thank you. Moving over to the question about the Delta Standalone writer: Scott, you mentioned that you are working on this feature. Any updates on what people can look forward to in the roadmap?
D
Besides the awesome ability to write to Delta tables, there was also a variety of performance improvements, one of which is an actual iterator API that people can use to read from Delta tables. That absolutely will help avoid out-of-memory issues, because you're no longer storing the entire latest state of the table in memory; it's just a very efficient iterator API.
D
So that's an exciting thing that we did release. Going forward, we're still figuring out what the community wants, but feel free to comment on our roadmap and give us suggestions.
B
So, like Scott said, we recently released the feature. Please try it and give us feedback, and if we missed out on anything, the community will take a look at it. Awesome. And then there is a question around how Delta 1.1 doesn't just support Apache Spark 3.2 but requires it. Yes, that's right: Delta Lake 1.1 does require Apache Spark 3.2. Ryan or Scott, any thoughts around it?
C
So what's the ask exactly? To try to support all the Spark versions? Is that the ask?
B
So the ask may be, for example: if we are not using Apache Spark, can Delta still work on other platforms, so that it doesn't require Apache Spark?
C
So I'm just trying to understand your question here. Currently, for Spark itself, Delta Lake has tried to support as many Spark versions as possible, but due to some technical limitations we have to pick one minor Spark version to support, because Delta uses a lot of Spark private APIs for performance improvements and a few new features.
C
This is why we only pick one minor Spark version to support for each Delta Lake version. For connectors like the Presto or Hive connector: for the Hive connector, in the new version we support both Hive 2 and Hive 3, and for Presto, we are working with the PrestoDB community to add the Presto connector to the latest Presto version. I think Presto releases pretty fast, so hopefully you can upgrade your Presto very soon. All right, and Denny?
A
No, no, just a quick callout, thank you, Ryan. This quick callout is related to some of the questions that I've been seeing on YouTube and LinkedIn about the manifest file. I'm just going to go backwards a little bit so I can answer two questions at once when it comes to generating the manifest file.
A
If you are trying to debug some of the issues that you're having, honestly, you probably want to join us in the Delta users Slack and ask us questions there, so we can see exactly what the problem is; it's a little hard for us to do this live. Now, what we can talk about at a real high level from a manifest file perspective is that it is a generated file that contains a list of the files that make up that version of the Delta table.
A
Remember, a Delta table is comprised of multiple files, and those files are associated with different versions; that way you have time travel and historical context. In other words, say you're on version 20 of the table: the manifest can be generated automatically, you know, upon insertion or update or whatever you do to it, for the X number of files that are associated with version 20, and then bam, there's the manifest; Presto or Athena can go read it, and we're good to go.
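As a quick reference, this is roughly how the manifest Denny describes is generated via the Delta Lake Python API; the table path is hypothetical:

```python
from delta.tables import DeltaTable

table = DeltaTable.forPath(spark, "/delta/events")  # hypothetical path
# Write a symlink-format manifest listing the current version's data
# files, which engines like Presto or Athena can then read.
table.generate("symlink_format_manifest")

# Optionally regenerate the manifest automatically on every update.
spark.sql(
    "ALTER TABLE delta.`/delta/events` SET TBLPROPERTIES "
    "(delta.compatibility.symlinkFormatManifest.enabled = true)"
)
```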
A
Now, here's a solution: that's why Scott and Ryan created the Delta Standalone reader. It allows us, at the point of read, to know exactly what metadata, i.e. what set of files, is associated with the read request you made at that point in time. Currently the PrestoDB reader itself, just like Ryan called out, is actually using this new version of the Delta Standalone reader that was released as part of 0.3; the blog is coming out soon.
A
I think it'll be next week, if I recall correctly. That allows us to go ahead and be able to read without the need of a manifest file. We are currently working with the Athena and Trino communities based off of that same code base that we've already placed into Presto, and just like Ryan also called out, Hive 2 and Hive 3 also leverage the same Delta Standalone reader. And then, just because I always love pitching the Delta Standalone writer, that same Delta Standalone project also has the writer, and the upcoming Flink project is actually using it to do the writes. Oh, I'm sorry, I realized I forgot to include Apache Pulsar: they are updating their reader to use the new version of the Delta Standalone reader as well. So hopefully that's enough of the details behind it.
A
But again, in terms of debugging further, I would highly suggest joining the delta.io users Slack so that we can dive deeper into those concepts.
B
Yeah, thanks, Denny and Ryan. I think the standalone reader and writer that we released in December has opened room for a lot of connectors, as well as for people to use non-standard Spark versions. So that's a great feature. There's one question around: can we have a brief about Delta Lake and its uses?
B
We certainly can, but just from the panel, if anybody wants to tackle this question and throw in some use cases for Vishal, that would be great.
E
The question is about Delta and its use cases? Yes, so I think one of the biggest use cases that we have is just being able to unify all the streaming and batch workloads into a single location. As a consumer of the data source, we can just consume it as a streaming source, and then, as a producer, both batch jobs and streaming jobs are writing right to the same location, which makes it really flexible for us and really simplifies our stack. We don't have to separate these two different types of workloads into different systems. I think that's a great benefit of using Delta.
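To illustrate the unification QP describes, a minimal sketch in which one Delta table serves batch and streaming producers and consumers alike; the paths and the batch_df/events_stream dataframes are hypothetical:

```python
path = "/delta/clickstream"  # hypothetical shared table

# A batch job appends to the table...
batch_df.write.format("delta").mode("append").save(path)

# ...a streaming job writes to the very same table...
(events_stream.writeStream.format("delta")
 .option("checkpointLocation", "/delta/_checkpoints/clickstream")
 .start(path))

# ...and consumers can read it either as a stream or as a batch table.
live = spark.readStream.format("delta").load(path)
snapshot = spark.read.format("delta").load(path)
```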
B
Yeah, I couldn't have asked this question to any panelist better than a consumer of Delta Lake itself; thanks, QP. With that, I think we have tackled most of the questions and I don't see any more, but we're happy to go over any questions as a follow-up. I guess we're getting close to time, so thank you all for joining us; it was an exciting panel and we shared great insights.
B
These office hours are scheduled monthly. If you have any feedback around the frequency of these office hours, or if you have questions in the meantime, please join us through Slack or the Google Group, or we also have GitHub. So please participate in the community; we're looking forward to having conversations with you. Thank you, and thank you to the panel.
A
Oh, one quick callout, sorry: we actually do them bi-weekly now, so every two weeks we'll have these community office hours. So if we didn't answer your question now, you can always wait for the next one two weeks from now. And just like Vini called out, please join us at the Delta users Slack. We've got lots of members and we're all very active, not just the folks here; there are a lot of other people that are super active that can provide answers as well.
B
That's great, yeah, thank you, Denny. And, you know, there was the swag from the past session, any status on that, Denny? The swag we had announced for the roadmap survey.
A
Yes, everybody's wondering about the swag. Yes, the swag has actually already been sent out; it started about two weeks ago, and our fulfillment process has already initiated. You should all receive it within the next two to four weeks. If you don't, definitely ping us in the Delta users Slack under the events channel so we can follow up on that, because it was all sent out already. And just a little tidbit callout since we're ending right now: the 2022 H1 roadmap will be coming out very soon.
B
That's awesome. Thank you, Denny, and thank you, Ryan, Scott, QP; great having you, and great having the audience. We'll look forward to seeing you in two weeks. Bye.