From YouTube: Delta Lake Community Office Hours
Description
Join us for the next Delta Lake Community Office Hours and ask us your #DeltaLake questions. Thanks!
A
Wait, hold on. Oh, sorry, I've got too many YouTube windows open, so now I don't know which one's which. Okay, I think I've got it figured out now. All right, and now let me go ahead and get LinkedIn going there. There we go. Perfect, we do have LinkedIn open and we have a bunch of people. All right, perfect. LinkedIn folks, thank you very much. We are off to the races with today's Delta Lake Community Office Hours.
A
Just give us one more minute to let some additional folks come in. But by the same token, if you do have any questions, please chime in either through your LinkedIn comments or through the Delta Lake YouTube channel comments. Either one works. We will be monitoring them and trying to answer your questions. So give us one or two more minutes to get ourselves started, and then we'll be off to the races.
A
If you want to chime in right away, please do. And thanks, Zane, for noting that we're live; I couldn't log into LinkedIn for some reason. But hey, chime in and tell us where you're based out of, which city you're from, either from YouTube or from LinkedIn. My name is Denny Lee, and I'm actually based in the Seattle, Washington area, a very, very rainy Seattle, Washington area today.
B
Hi, I'm QP, and I'm based in the Bay Area. It was rainy for the last two days, but it's all sunny now.
C
Yeah, I'm Ryan, a software engineer, and I currently live in the Bay Area.
A
Cool, cool, cool. All right, well, I think we can probably start now. So in that case, let's go ahead and just dive into it. Again, if you have any questions, please go ahead and type them into the YouTube comments or the LinkedIn comments. We are monitoring them now and we will go ahead and answer those questions. Okay, but we'll probably start with a little bit about where we are with the Delta roadmap. So I'm going to go ahead and ask some questions for Ryan a little bit about...
A
Sorry, excuse me: QP first, about Delta Rust and where we're currently at, and then I'm going to ask a few questions to Ryan in terms of where we're currently at with the Delta roadmap. Specifically, I believe we had a lot of questions about Apache Spark 3.2 support, so I figured we'd go ahead and dive into that too. So, first things first: QP, why don't you tell us a little bit about where we're currently at with the Delta Rust API?
A
We had a webinar a few weeks ago about kafka-delta-ingest being in production at Scribd. Anything else that you want to add in terms of what the hot topic items are, or things that people want to work on? I actually have a particular item that I want to ask people to join, but I wanted you to start first.
B
So I think, at least for me, there are two major items that we were working on. One is reducing the memory usage of delta-rs for really large tables. We noticed that it's actually less efficient than the Scala implementation, so we identified some gaps that we can optimize to reduce the memory usage by, you know, 10 to 100 times. That's what the community is working on; we're working on a design for that right now. And then the other thing is...
B
I am experimenting with the new Parquet implementation in Rust, which is supposed to have a 5 to 10x speed improvement. I'm also rewriting the whole checkpoint parsing logic to parse the data column by column instead of row by row. So that's another item that I'm working on. Mostly performance improvements at this point. Other than that, I don't think there's anything major going on.
A
Okay, cool. Well, I did have a couple of questions, and then I'm going to answer a couple of questions live, and then I'll switch to Ryan, and then I'll switch to Scott. Scott is actually here; for some reason his screen shows Ryan's name, so I'm not sure how that happened, but that's actually Scott, not Ryan. There are not two Ryans. Okay.
A
Oh, interesting. Okay, I'm not sure why that happened, but all right, I'll see what I can do to try to get that sorted. Meanwhile, QP, because I'm going to start with you, I'm just curious: what is the timeline to actually take some of the writing capabilities that are in kafka-delta-ingest and basically add them back into delta-rs proper, the core? I'm just curious.
B
I don't think we have a specific timeline for this; it's more up to the community. We're mostly waiting for pull requests from Scribd, and we don't have an agenda for that. But if anyone is interested, please feel free to take the code from kafka-delta-ingest and merge that into delta-rs.
B
I think it's ready for merge at this point.
A
Oh, that's good to hear. Okay, cool. Well then, this is where I will do my usual thing, which is to say: hey, check us out at delta.io. Also, we're actually all on the Delta Users Slack channel, so in addition to talking to all of us here, you can also go ahead and ask your questions...
A
...there, no problem at all. And if you, of course, want to do any pull requests or help us with any merges, whether it's Delta Rust or Delta core or anything else, this is a great way to talk to all of us. Okay, so, perfect. Thank you very much, QP, I appreciate that. Ryan, by the way, we have a couple of other questions coming in on both LinkedIn and YouTube.
A
We will answer them live, but I did want to give Ryan and Scott a little bit of a chance to chime in too about where we're currently at. So I'm going to go to Ryan. Ryan, want to do a quick introduction of who you are for starters? And then, after that, my question to you will be: what's the current status for the Delta 1.1 release and Apache Spark 3.2 support?
C
Yeah, so I'm Ryan, and I'm currently mostly working on Delta Lake projects. For the Delta 1.1 release, we are working on this and it should be available in about two weeks, maybe less than two weeks. This will support Spark 3.2 and also Scala 2.13, because Spark 3.2 already supports Scala 2.13, and it is pretty great to see we are upgrading to a better and more powerful Scala version.
A
Excellent, that's very helpful. Thank you very much, Ryan. And again, to everybody joining from LinkedIn and YouTube: we are looking at your questions and we will answer them shortly. I just wanted to make sure we had the quick status update for everybody first. So, Scott, you threw me off again, because I saw your name and it said Ryan again. Okay, Scott!
A
Can we have the current status update on both the Delta Standalone writer and also the Flink Delta connector? And then I'll answer some questions concerning Presto. But first things first: please introduce yourself real quick, and then those questions, please.
D
Sounds good, and good morning, everyone. Hi, I'm Scott, I'm a software engineer on the Delta ecosystem team here at Databricks, and I'm working on a bunch of open source projects right now, including the Delta OSS 1.1 release with Ryan. The two main projects, though, that I'd like to talk about are the Delta Standalone writer and our Flink sink connector, which is also being actively developed right now. The Standalone writer we're hoping to release in the coming weeks.
D
We just have to release Delta OSS 1.1 first, and then all of our efforts will be on the Standalone writer. That's really exciting, and I can talk about some of the improvements and features that it's adding later on. For the Flink connector, progress is going steady. We're just in kind of a QA and testing phase right now: we are reviewing the code, making sure it's up to our standards, and it's looking great, and we're double-checking all the edge cases as well through unit tests. So it's coming along really well.
A
Excellent, thank you very much. I really appreciate you providing that context. And finally, I did want to give a quick shout-out. Vinky and Sudet were not able to join today, but I did want to call out that we're also having some solid traction on the Presto-to-Delta reader as well, as another integration. So right now...
A
We have both the Presto code and the Trino code, and the Trino code is actually being tested as we speak. It's leveraging the Delta Standalone reader in order to be able to, well, read. So Presto and Trino can now read from Delta Lake without using the manifest file.
A
So I wanted to give that quick update as well. All right, perfect. Now, diving into the questions. I'm going to go ahead and ask the first question, and this one is from Oscar on LinkedIn: what actually is Delta Lake, and why is Delta Lake important? So just an introduction to what Delta Lake is. I'm going to open the floor to any of you three to provide that answer. Anybody want to take a stab at that one first?
C
With this, we can provide ACID transaction support, so you don't need to worry about concurrent queries. And for the transaction log format, we are using open formats such as Parquet and JSON, which means we can leverage Spark to process these logs in parallel using your whole cluster, to speed up all the metadata operations. In addition, we also provide a lot of DML commands, such as MERGE, UPDATE, and DELETE, with which you can modify your table. And also...
C
Lastly, we also provide a lot of our own special commands, such as DESCRIBE HISTORY. You can audit the operations on your table by looking at the history of your table.
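The transaction log Ryan describes is literally files in open formats: JSON commit files (plus Parquet checkpoints) under a `_delta_log/` directory next to the data. As a rough, standard-library-only Python sketch of that idea, the snippet below writes and replays a fake one-commit log. The directory layout and zero-padded file name mirror Delta's convention, but the action fields shown here are simplified assumptions for illustration, not the full Delta protocol.

```python
import json
import os
import tempfile

def write_commit(table_dir, version, actions):
    # Delta names each commit file as a zero-padded 20-digit version number.
    log_dir = os.path.join(table_dir, "_delta_log")
    os.makedirs(log_dir, exist_ok=True)
    path = os.path.join(log_dir, f"{version:020d}.json")
    with open(path, "w") as f:
        for action in actions:  # one JSON action per line
            f.write(json.dumps(action) + "\n")
    return path

def read_log(table_dir):
    # Replaying the JSON commits in version order reconstructs table state.
    log_dir = os.path.join(table_dir, "_delta_log")
    actions = []
    for name in sorted(os.listdir(log_dir)):
        if name.endswith(".json"):
            with open(os.path.join(log_dir, name)) as f:
                actions.extend(json.loads(line) for line in f)
    return actions

table = tempfile.mkdtemp()
write_commit(table, 0, [
    {"commitInfo": {"operation": "WRITE"}},
    {"add": {"path": "part-00000.parquet", "size": 1234}},
])
actions = read_log(table)
print([list(a)[0] for a in actions])  # the action types, in commit order
```

Because the log is plain JSON and Parquet, any engine (Spark, Trino, delta-rs) can parse it, which is exactly the "open format" point above.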
A
Perfect. Ryan, thank you very much. I'll just chime in; I actually just realized I forgot to introduce myself. My name is Denny Lee, I'm a developer advocate here at Databricks. I want to add to Ryan's answer just because I'm formerly a database guy; I actually used to be part of the SQL Server database team. The one thing I want to add to Ryan's callout is that inherently what's great about databases is the fact that you had ACID transactions to protect the data, and you're going, like, hey...
A
I have a data lake with all this data. Do I want to protect that data? Ultimately, that's what it boils down to: taking the transactional protection of your database and applying that to your data lake. Now, what Ryan called out is all these cool features and abilities, because we have those ACID transactions to protect your data. Now, bam, we're good to go, right? And so that's the important thing and the important callout.
A
So hopefully that answers your question, Oscar. QP or Scott, is there anything else that Ryan or I might have missed? Otherwise, we'll switch to the next question.
B
The other thing I would add is that it really made a big paradigm change when you can do both streaming and batch on a single data source, a single table. That made a huge difference for us.
A
By the way, if you did fill out the Delta Lake survey (we actually had more than 630 people fill it out), you will actually get this t-shirt. I just want to call that out. So we will be doing this again, probably every half a year, asking you to fill out the survey. And if you're wondering: yes, we are going to send them out. It's just that, because there are 630 shirts...
A
...I needed to order some more shirts first before I could send them out. So, my bad; I didn't realize it would turn out to be that popular. All right, next question, from Yusuf. Yusuf on YouTube has a question about unstructured data, specifically about images. He's wondering: can you actually store image data or unstructured data in Parquet files or in Delta Lake? And what is the impact, especially when you're dealing with, for the sake of argument, image data or unstructured data when you're connecting from Kafka to Delta Lake? So that's a two-parter, basically. Anybody want to take a quick dive into that first?
B
So it's definitely possible to store any kind of data in a Delta table. The only thing to be aware of is that you have to encode it before you store it into the Delta table, because the official Delta Lake spec only supports a specific list of types for the columns. So you have to encode it; I think we do support the binary type in Delta tables, so I'm guessing that's how you would encode that data into Delta tables.
B
Actually, I think the main benefit you get from doing that is this: if, say, your images are stored in S3 and you want to query them, you have to do individual S3 API calls to get those images. But if you store them in Delta tables, not only do you get ACID support, you can also batch-load that binary data more efficiently, instead of issuing individual S3 calls.
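As QP notes, arbitrary payloads have to be encoded into one of the column types the Delta spec supports (such as the binary type) before they are written. Here is a minimal, standard-library-only sketch of that encode/decode round trip; the record layout is purely illustrative (an assumption, not a Delta API), and in a real pipeline the bytes would land in a binary column written through Spark or another Delta writer.

```python
import base64

def encode_image(name, raw_bytes):
    # Pack raw image bytes so they survive a JSON/text hop (e.g. Kafka)
    # on the way to a binary column; Delta itself stores binary natively.
    return {"name": name, "content": base64.b64encode(raw_bytes).decode("ascii")}

def decode_image(record):
    # Recover the original bytes from the stored record.
    return record["name"], base64.b64decode(record["content"])

fake_png = b"\x89PNG\r\n\x1a\n" + b"pixel-data"
rec = encode_image("cats/001.png", fake_png)
name, restored = decode_image(rec)
assert restored == fake_png
print(name, len(restored))
```

The key point is that the payload round-trips losslessly; once it sits in a binary column, one scan of the table replaces thousands of per-object S3 GET calls.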
A
Excellent, thank you very much. Hopefully, Yusuf, we did answer your question. If not, please go ahead and chime in through YouTube so we can continue answering it. Okay, all right, let's switch over to LinkedIn again.
A
Let's see here. Oh, Shiv actually has a great question, and again I'll open the door for everybody. They have a business need to scan and search an entire data lake of thousands of Delta tables to find all occurrences of an email address. Do we have any ideas on how to make that particular query somewhat efficient? I do want to call out that if we're not looking at the data in any form of structured way...
A
...then it would almost be better not to even look at it as a set of tables. It would be better to just literally run some distributed, RDD-type queries and say: look for a particular email. But assuming there is some structure, then, yes. QP, Ryan, or Scott, do you happen to have any context on how a query like that could be made more efficient?
C
Yeah, especially for such a use case, I'd actually suggest you use only a few tables rather than thousands of tables. This is basically because Delta uses transaction logs: if you have thousands of tables, then we need to load thousands of transaction logs, which is very slow. But if you have one giant table, putting those thousands of tables into one table, then it will be much faster and it will scale very well.
A
I believe that answers the question. If it does not, please chime in on LinkedIn and we'll do our best to follow up, but I believe that answers it. Okay, let's see, let me go ahead and switch. There is a question from Marcin on LinkedIn. He's asking: I'm using Delta Lake with table-level transactions for writing data; what would be the best approach to tackle multi-table write transactions and rollback?
A
So this is the old multi-table write transactions and rollback question. Since Delta Lake supports single-table transactions, you can roll back by basically overwriting from a previous version via time travel. How would we potentially resolve that problem for multi-table writes? I'll open the floor for that question, but by the same token, I happen to have an answer too, just in case. Anyway, go ahead and chime in, anybody here, first, and then I'll chime in afterwards.
C
Yeah, so basically Delta doesn't support multi-table transactions, and you can post your comment on the roadmap GitHub issue. If this gets a lot of upvotes, we will consider thinking about how to design this. It is a pretty challenging problem, and it will probably take a lot of time to think about how to design it. So currently we don't have a good answer for such a use case.
A
Right. Saying that, there are actually two other potential answers that I can provide for this. Okay, so in terms of multi-table transactions, there's one where you basically add the transaction tracking yourself. Now, this one is not for the faint of heart, I admit; I do want to call that out. In other words, you're going to need to write some code. But basically, since you can actually alter the metadata itself within the transaction log...
A
...you can go ahead and take the transaction ID of each transaction for each table and record that information in the metadata as well. That way, whenever you have to roll back, you know which version to roll back to. Now, that's a lot of code that you have to write yourself, so people are like: dude, what the heck, okay, is there just something available out of the box? As Ryan answered, right now...
A
...we don't have it, just because it is a very challenging problem. But if the community is asking for it, absolutely, we can start thinking about that problem. There also happen to be solutions from lakeFS and also from Nessie, which basically give you Git-like transaction functionality...
A
...excuse me, for your data lakes, and they have various levels of integration with Delta Lake right now. In fact, we're working closely with both the lakeFS and the Nessie communities to do exactly that. We recently had a talk with lakeFS, with Paul Singman and myself, on lakeFS and Delta Lake functionality. And with Nessie, by the way, we actually have regular open community meetings with the Nessie community every two weeks or so. In fact, I believe...
A
...the next meeting is this coming Tuesday. So yeah, if you happen to need it right this second, there are those particular three routes: custom code altering the metadata, lakeFS, and Nessie. And then again, absolutely go ahead and create a GitHub issue and get votes, because that actually helps us understand what the community asks are, to potentially go ahead and work on that faster. Okay, so hopefully that answers your question.
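The do-it-yourself route described above (recording each table's transaction version in metadata so you know which version to roll each table back to) can be sketched as plain-Python bookkeeping. Everything below is hypothetical coordinator logic under assumed data shapes, not a Delta API; in practice the snapshot would live in the tables' commit metadata, and the rollback itself would be done per table via time travel or RESTORE.

```python
# Hypothetical coordinator for a multi-table transaction: snapshot the
# version each participating table is at before any writes begin.
def begin_txn(txn_id, current_versions, journal):
    journal[txn_id] = dict(current_versions)  # pre-transaction versions

def rollback_targets(txn_id, journal):
    # If the multi-table transaction fails partway, each table should be
    # restored (e.g. via time travel) to its pre-transaction version.
    return journal[txn_id]

journal = {}
begin_txn("txn-42", {"orders": 17, "customers": 5}, journal)
# ... writes to both tables happen here; suppose the second write fails ...
print(rollback_targets("txn-42", journal))
```

The hard parts that make this "not for the faint of heart" are exactly what the sketch omits: persisting the journal atomically and handling concurrent writers to the same tables.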
A
I think that was from Marcin; great question from you, and hopefully that helps answer it. Okay, all right. So I do have a support question from YouTube that I thought was applicable. It asks: does Delta support struct types and array types? Apparently Pavan had tried to do this in the past, but it didn't work and he had to write directly to Parquet. So let me just pose the question openly: does Delta support struct types and array types?
A
Yes. In fact, I want to add that as of Delta Lake 0.8.0, if I recall correctly, we even had MERGE support for struct types and array types as well. So it was supposed to be there. So again, if you do run into any issues, please let us know, because that should have worked. Maybe there's a bug, so if you've got a repro, let us know.
A
Okay, sorry, I meant to click on LinkedIn and I clicked the wrong button by accident. Sorry about that. All right, let's see. Hey, Ryan, there is a question from the community about JSON support. I'm sorry, I'm trying to find it again; let me try to find that. Sorry, guys, this LinkedIn feed is pretty long now, so I'm having a hard time finding everybody's questions.
A
Okay, I'm sorry, I don't see the JSON question here any longer, and I'm not sure why, so I'm going to chime in on some other questions. For the person that was asking the JSON question: please go ahead and put it back in. I actually did want to try to answer it, but I can't seem to find it for some reason. Okay. And by the way, to the commenter who said they love the tee: thank you, I'm glad you love the t-shirt.
A
Like I said, we will send out a survey in the new year, and this time I'll order more shirts, so we'll be better prepared. Basically, this is what we usually do: we want you to fill out our survey to tell us what is important for us, as the Delta Lake community, to be working on, and in return for you spending 10 minutes of your time to tell us this information, we give away t-shirts. So anyways, all right, all right.
A
So there is a great question from Sraman; I apologize if I say your name incorrectly. Any plans on releasing new Delta connectors for Apache Hive?
A
Excellent. Now, let's assume that for now. Sraman, if you actually had something else that you were asking about specifically, let us know, but just like Ryan called out, we already have Hive 2 support, and Hive 3 support is coming out quite soon. Okay. And June, you're going "oof"; I'm assuming that has to do with the t-shirts, so my apologies again, the survey will go out and we can chime in. Ala Riza, and I again apologize if I'm saying your name incorrectly, asks: is Delta Lake appropriate for real-time analytics?
A
Okay, so that's the question: is Delta Lake appropriate for real-time analytics? QP, by any chance do you want to dive in, since you seem to do a lot of analytics?
B
Yeah, I guess it depends on what your definition of real time is. Is it sub-second? Is it some minutes of latency? I think Delta Lake is pretty good when the latency requirement is not too strict. For example, if you're okay with getting updates every couple of minutes or every 10 minutes, it's actually pretty good and it's able to handle that load. If you want sub-second latencies, then it's not going to work.
B
We do a lot of real-time updates on our Delta tables and consume them in our dashboards, and typically we get about minute-level update latencies for our dashboards, and it's been working really well for us.
A
Right, and just to add to QP's callout: if you're trying to go even faster, that's when you use either Structured Streaming itself or Flink. And yes, Flink is the competitor to Spark, per se, but there's a Flink Delta connector.
A
So yes, we all believe that all of us need to use something to ensure that reliability for your data, which is why we have a Flink Delta connector. And, incidentally, we actually are working with the folks over at Ververica, formerly known as Data Artisans, specifically to help this process along. So, just to let you know, if you do need to do that sub-...
A
...second work, this is where the Flink or Spark Structured Streaming scenario comes in. Then you can write the data to your Delta Lake, and from there, no problem at all. And, incidentally, for some folks that want to do CDC based off of it: Paul Roman and myself, I think earlier this year, did a session on using Delta as your CDC source. So, just in case you want that particular pattern, that's a pretty cool video as well.
A
Okay, so hopefully that answers your question. And again, number one, I apologize if I butchered your name; number two, hopefully I answered your question. If you have a follow-up, we've got about two minutes left, but please do chime in. There's a great question from Remy: do we plan to allow dropping columns in a Delta table? I'm going to ask either Scott or Ryan for that.
C
Yeah, we do. My apologies, we're already kind of working on this. It's unlikely to be in the 1.1 release, but we are expecting to make it into the 1.2 release, probably happening early next year.
A
Thanks very much. Okay, so, June, thank you very much for chiming in about Charlemagne's question; there's apparently a JSON compatibility issue. Okay, so apparently we had talked to you, Ryan, actually, about that. So I'm just curious: by any chance, do you happen to recall what this JSON compatibility issue with Delta Lake was in the past? It actually resulted in, I believe, something like an array-out-of-bounds exception issue. I apologize for putting you on the spot.
A
June, thank you very much for bringing that question back in. Sherman, if you can go ahead and create a GitHub issue and let us know: the four of us here and a bunch of other engineers are actively monitoring it, and if you chime in there, we'll definitely go ahead and answer your question. The other place, of course, is to go ahead and ping us on Slack as well. So, hopefully, to answer your question, I will go ahead...
A
Let's see. Okay, I'm going to go ahead and probably end today's session. There are some great remaining questions; I apologize if I missed them, but we have one minute left. There are some compete questions here, and by the way, we do want to be very clear: we do not do compete answers.
A
Okay, I have no problem answering questions in terms of how Delta Lake works or how you can use Delta Lake, but when it comes to comparisons with other technologies, we won't do that here, because we don't think we'd do fair justice to the other technologies either. Okay, I think we can honestly say that the four of us, and anybody else that does these community office hours, are slightly biased, so I don't want to pretend otherwise.
A
Okay, so saying that, I apologize if we haven't answered all your questions, but please join us on the Delta Users Slack; you can continue asking your questions there. As well, go ahead and join the Delta Lake GitHub; you can ask your questions there too. So those are two great places, and of course, Stack Overflow too. So, without further ado, I want to thank QP, Ryan, and Scott for their time. I appreciate you guys chiming in. Again, join us at delta.io.