From YouTube: Delta Lake Community Office Hours
Description
Join the Delta Lake Community Office Hours to ask your questions on all things Delta Lake! You can also join us in the Delta Lake Users Slack at: https://dbricks.co/delta-users-slack. Thanks!
A
Stop talking about my laundry room, okay? Yeah! Thank you. Thank you very much. All right, thank you very much, everybody. We are currently live for today's Delta Lake Community Office Hours. We're live streaming to YouTube. At least, I think we are. All right, so give me one second, because I'm actually hearing a repeat. All right. There we go. Okay, you guys still hear me good? All right, yeah! I need a nod from the panelists, please. Yes, excellent, excellent! Okay, so we are live. Thank you very much. So, for folks that are on YouTube: please go ahead and ask us questions. We're gonna do a quick start, which is introductions from everyone, and I'm gonna just do it based off of the panel that we have here. Actually, I'm gonna stop sharing, so we can just have a panelist view. So let me switch to the gallery view, and I will start, from my left, with QP. I want you to do a quick introduction of who you are and why you've decided to bless us with your presence on the Delta Lake community office hours. All right, thank you.
C
Hi everyone, I am Vini Jaiswal, developer advocate at Databricks. I have worked with a lot of companies: digital natives, unicorns, you know, Fortune 500 companies. Before this I worked at Citi. So I help data practitioners to be successful in building on Apache Spark, MLflow, Delta Lake, and other open source technologies. So I hope I can add value by answering your questions, or if you want to collaborate, I'm here for you. So yeah, that's about me.
D
Hey everyone, I'm Scott, a software engineer on the Delta ecosystem team here at Databricks. I'm here today to answer any questions I can. The current project I'm working on is a single-JVM, Spark-less writer into a Delta table, so if there are any questions about that or our connector ecosystem, I'm happy to help out. Thanks, Denny.
A
Oh yeah, that's okay! Thank you very much, Vini. Hi everybody, my name is Denny. I'm a developer advocate here at Databricks, a long-time Spark and Delta Lake guy. So hopefully we can answer your questions. And we actually have a question coming in already from YouTube, which is great. Hey George, how are you doing? Your question here is: do you have a document or talk that we can point to that explains what data-type-to-data-type evolutions are permitted? Okay. Actually, I wanted to make sure that we're talking the same language, just to be sure: you're talking about schema evolution within the context of Delta, I believe. That's what you're referring to. So is that the context you're talking about? If you could just follow up in YouTube, just to let us know that's what you're referring to.
A
Then we can probably answer your question. All right. Meanwhile, I'm actually going to go ahead and find the blog post that I believe George will want to work with, so give me one second to find it. All right. So, George, I know you've asked a second question, but I want to still answer the first question first, which is concerning schema evolution. Okay, perfect, you've answered: yes, that is what you want. So I have pasted into YouTube the link to this blog called "Diving Into Delta Lake: Schema Enforcement & Evolution". About halfway down in the blog, we go ahead and talk about this: how schema enforcement works and how columns can differ in data type from the target table, things of that nature. So this is a good starting point. The second starting point would be going to delta.io. If you go there, we have the documentation, so you can dive into it.
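To make the enforcement-versus-evolution distinction above concrete, here is a toy sketch in plain Python. It is illustrative only, not Delta Lake's actual rule set: the type labels, column names, and the set of "safe upcasts" are simplified assumptions standing in for the real behavior described in the blog post.

```python
# Toy sketch of schema enforcement vs. evolution: without mergeSchema,
# writes must match the table schema exactly; with mergeSchema, new
# columns may be added and a few safe numeric upcasts are permitted.
# (Illustrative assumptions only, not Delta Lake's actual implementation.)

SAFE_UPCASTS = {
    ("byte", "short"), ("byte", "integer"), ("short", "integer"),
}

def check_write(table_schema, incoming_schema, merge_schema=False):
    """Return (ok, reason) for writing `incoming_schema` into `table_schema`.

    Schemas are dicts mapping column name -> type name.
    """
    for col, typ in incoming_schema.items():
        if col not in table_schema:
            if not merge_schema:
                return False, f"new column {col!r} requires mergeSchema"
            continue  # schema evolution: the column gets added to the table
        existing = table_schema[col]
        if typ == existing:
            continue
        if merge_schema and (typ, existing) in SAFE_UPCASTS:
            continue  # safe widening, e.g. byte -> integer
        return False, f"type mismatch on {col!r}: {typ} vs {existing}"
    return True, "ok"
```

For example, writing a frame that adds a `ts` column is rejected until `merge_schema=True` is passed, which mirrors the enforcement-then-evolution flow the blog walks through.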
A
The four of us you see here, plus a bunch of other Delta Lake contributors, can go ahead and answer your questions. Particularly, for example, you're specifically asking about translating date to timestamp, and I can tell you up front (QP, Vini, and Scott, you can definitely add to this) that the key issue when you come to that is that, depending on how you're looking at it, sometimes you treat that value as a string, and sometimes you actually do in fact treat it as a date or as a timestamp. So there are actually a lot of idiosyncrasies when it comes to the conversion of these timestamps. Even though you're potentially using PySpark (you had mentioned in your second question that you're using a PySpark method), the problem that you really run into is that there's still that conversion from the PySpark method back into Scala classes, because what we're doing underneath the covers is actually done in Scala. So that's pretty much the basic context. To the other three: anything that you think I missed, or that you could add?
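The string-versus-date-versus-timestamp ambiguity described above can be sketched with Python's standard library. This is purely illustrative (the literal value is made up): the same text can end up typed three different ways, and a timestamp silently carries an implied time-of-day that a date does not.

```python
from datetime import date, datetime

# One literal, three possible interpretations. Which one you get in a
# pipeline depends on how the column was typed before the write.
raw = "2021-06-15"

as_string = raw                              # stays a plain string
as_date = date.fromisoformat(raw)            # a date, no time component
as_timestamp = datetime.fromisoformat(raw)   # a timestamp, midnight implied

assert isinstance(as_string, str)
assert as_timestamp == datetime(2021, 6, 15, 0, 0)

# Casting date -> timestamp is lossless; timestamp -> date drops time-of-day.
assert datetime(2021, 6, 15, 13, 30).date() == as_date
```

The same kind of implicit choice happens at the PySpark-to-Scala boundary Denny mentions, which is where the conversion idiosyncrasies tend to surface.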
C
I think there's one more resource I can point to: our early release has a chapter which dives deeper into schema as well.
A
Oh excellent, that's a good call out, Vini. Why don't you log into the YouTube link, or else Slack it to me and I'll drop it directly into the interview? Oh.
A
Oh, that is a great question. So, in order to join, let me go ahead and find the link right now. If you go to delta.io and you look at the bottom of the page, there is actually a big gigantic link called "Slack channel". I'm going to go ahead and copy that link and paste it directly into the YouTube channel, so that way you have it. So that's a great call out, QP.
A
So yes, a lot of us, as noted. Because while George is definitely here, we also do want to call out that there are other people who are going to watch this video after the session. So that's exactly the point: for posterity, we're going to leave all of that there for everybody. Okay, okay, let's see. "Is there a..." Okay, hey George, your next question here is: is there a PySpark method that can set the metadata property of an object, similar to the one that sets the metadata property via the alias for a column, to pass it on to the metadata for the Delta table?
A
Okay, George, I'm not entirely sure of the context of that particular question. There is this concept of user-defined metadata that you can absolutely go work with, and you can absolutely change that user-defined metadata irrespective of whether you're aliasing or accessing it directly. But I'm honestly not sure what the context of that particular question is, so if you can add some additional context, we'd be glad to try to dive into it. That's awesome, yeah.
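For reference, the user-defined metadata mentioned above is recorded as a userMetadata field inside the commitInfo action of the table's transaction log. A minimal sketch of what such a log entry can look like follows; the values here are made up, and real commitInfo entries carry more fields than shown.

```python
import json

# Illustrative commitInfo action carrying user-defined metadata, roughly as
# it might appear as one JSON line in a _delta_log file. (Sketch only;
# timestamps, operation details, and the metadata string are invented.)
commit = {
    "commitInfo": {
        "timestamp": 1624000000000,
        "operation": "WRITE",
        "userMetadata": "ingest-batch-42",  # the user-supplied annotation
    }
}
log_line = json.dumps(commit)
assert json.loads(log_line)["commitInfo"]["userMetadata"] == "ingest-batch-42"
```

This is distinct from Parquet's own file-level metadata, which is the distinction Denny draws a bit further on.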
A
Cool. So Vini has graciously also reminded me that we have this document, which I've pasted into YouTube, and also the Data Engineer's Guide to Apache Spark and Delta Lake. We've posted both of them there as well, so hopefully that will provide the additional context that you can use. Okay, any other questions that we can take?
A
Okay, let's see: "the Parquet format has..." Okay, so George, I want to let you type it out. Because yes, I do agree that the Parquet format has an object property for metadata, but I guess what I'm trying to understand is the context of your question in terms of working with that metadata. Are you talking about the user-defined metadata, where we actually put a JSON object directly into the Parquet, or are you talking about within the transaction log itself when it comes to Delta? So I guess I'm trying to understand a little bit better. Meanwhile, I'm a little tired of listening to my own voice, so I did want to go ahead and ask others on the panel some questions. So, Scott, let me start with you: can you tell us a little bit about the Delta Standalone Reader and the Standalone Writer, and what's the current status of those projects?
D
Sure, thanks for the question, Denny. Yeah, so the Standalone Reader was a great project that was released Q4 last year, and what it enables people to do, in a single JVM, without using Spark and all of the complicated transitive dependencies you have to worry about, is have a lightweight library that lets you read from a Delta table. So it gives you all the same transactional guarantees, including multiple writers in a single JVM, as Delta OSS, and, you know, implements the proper Delta protocol. So you can read everything, and it's completely compatible with other versions of Delta OSS and DBR, the Databricks Runtime. So that's a great project, and what we're working on now is the Delta Standalone Writer. It's the same thing, but for writing. We're targeting for that to come out early... sorry, later this year; it's a work in progress right now. If people have any questions about that, I'd be happy to answer them.
B
Scott, for the Standalone Reader and Writer: they share the same core Scala code base as the Spark one, right?
D
Great question: they actually don't. The Standalone project is part of the connectors repository, as opposed to the delta.io Delta Lake repository, so they're separate, because we are re-implementing all the same transactional and ACID guarantees that Delta OSS provides, but without using Spark. So if you are someone who wants to write a connector for Flink, Presto, Athena, you name it, you're able to do that without having this bulky Spark dependency in your project. Awesome, yeah, great question.
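The Delta protocol that these connectors re-implement boils down to replaying the JSON actions recorded in a table's _delta_log directory. Here is a deliberately minimal replay sketch, a toy rather than the Standalone Reader's actual code: it handles only add and remove actions, and the file names are made up.

```python
import json

def replay(log_lines):
    """Replay Delta transaction-log actions to get the live set of data files.

    `log_lines` is an iterable of JSON strings, one action per line.
    Only `add` and `remove` are handled here; the real protocol also has
    metadata, protocol, and commitInfo actions, checkpoints, and more.
    """
    live = set()
    for line in log_lines:
        action = json.loads(line)
        if "add" in action:
            live.add(action["add"]["path"])
        elif "remove" in action:
            live.discard(action["remove"]["path"])
    return live

log = [
    json.dumps({"add": {"path": "part-000.parquet"}}),
    json.dumps({"add": {"path": "part-001.parquet"}}),
    json.dumps({"remove": {"path": "part-000.parquet"}}),
]
assert replay(log) == {"part-001.parquet"}
```

A connector that can do this replay correctly, in any language, can read a Delta table without Spark, which is exactly the point of keeping the Standalone project Spark-free.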
A
Right, and just to harp on this point a little tiny bit more: we're currently utilizing it for the Flink and Presto connectors. We're also adding in, I believe, an iterator for Beam and Pulsar. I knew there was one other one that I was just trying to remember, oh my goodness. So, pretty cool. But now I want to switch gears, and I'm going to switch over to you, QP. Hey QP.
B
So, similar to what Scott did with the JVM: the big data ecosystem is mostly JVM based, so the Standalone Java reader and writer will basically be ready to plug and go into any JVM-based framework or system. Now, there are small edge cases where sometimes you might want to go a little bit more extreme and not use the JVM at all for whatever pipeline you're building, and that's where the Delta Rust implementation comes in. It's all implemented in Rust and can be compiled to a single native dynamic library or binary that you can deploy to any kind of environment without even having to depend on the JVM. That was the main goal of the project, and the first production application that we built at Scribd was leveraging this binding to build an ingestion daemon that ingests data from Kafka and saves it directly into Delta tables, which replaces some of our really heavy-duty Spark Streaming jobs.
C
Yeah, so, just to the panelists, right: we talk about ACID transactions a lot. In practical scenarios, why do you care about ACID transactions? If you can educate viewers about this.
B
So I think there are a couple of benefits. One of the biggest benefits we got: with S3, it's eventually consistent, and with this kind of ACID guarantee it becomes a lot easier to do multi-reader, multi-writer scenarios, right. We don't expect to have, you know, a hundred or a thousand writers in this case, but, you know, with just five or six writers, Delta can easily handle that kind of multi-writer scenario, and it makes it really easy to build more complex data pipelines with multiple consumers and producers, end to end. And another...
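The multi-writer safety described above rests on an optimistic, put-if-absent commit: each transaction tries to atomically create the log file for the next version, and a writer that loses the race must re-read the log and retry at a later version. A toy sketch follows, with local files standing in for object-store commits; this is illustrative only, since real implementations layer the same idea on storage-specific atomic primitives.

```python
import os
import tempfile

def try_commit(log_dir, version, payload):
    """Attempt to atomically claim the commit file for `version`.

    O_CREAT | O_EXCL fails if the file already exists, so exactly one
    writer can win a given version; the loser should re-read the log and
    retry at version + 1. (Toy stand-in for object-store put-if-absent.)
    """
    path = os.path.join(log_dir, f"{version:020d}.json")
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # lost the race for this version
    with os.fdopen(fd, "w") as f:
        f.write(payload)
    return True

with tempfile.TemporaryDirectory() as d:
    assert try_commit(d, 0, '{"writer": "a"}') is True
    assert try_commit(d, 0, '{"writer": "b"}') is False  # version 0 taken
    assert try_commit(d, 1, '{"writer": "b"}') is True   # retry succeeds
```

Five or six concurrent writers, as in QP's example, simply means a few extra retries, not corrupted tables.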
A
So, actually, I did have a follow-up question for QP, specifically for delta-rs. One thing I was curious about, with kafka-delta-ingest: is the plan then also to take some of that code base and put it back into the Rust API core? So that way, for the sake of argument, I could potentially use the Python bindings to both read and write to Delta. I'm just curious what that would look like.
B
Yeah. The Rust core today can already do both read and write, and also checkpointing, but I think we've only exposed the read APIs to Python to date. Another really good benefit of writing this whole thing in Rust is that Rust can really easily export C-compatible APIs, so any language that binds with C, which is almost all languages, can leverage the same Rust core and get access to Delta Lake by writing a really thin layer of binding. So the native Python binding was one of those applications as well. There are already a couple of contributors using the Python binding in production, but today we only expose the read APIs, right. Now, in kafka-delta-ingest today we do have a set of higher-level write APIs, but they live only in that repo. So in the long run we do want to provide high-level write APIs in the delta-rs core, so that it's easy for people to use. Right now it only has a really low-level set of write APIs, where in order to use them you have to have a fairly good understanding of how the Delta protocol works, but we do plan to improve that in the future. And as Scott said, the Standalone Java reader is more feature-complete than the Rust one, that's for sure; we're still catching up.
A
All right, I want to shift gears a little bit to Vini now. So, hey Vini, I believe you and I did a session last week about the Delta Lake OSS roadmap; I believe that's what it was. So I'd love for you to go ahead and give the folks here a little bit of context.
C
Yeah, that's a great call out, Denny. So, if you attended last week's OSS session, we gave an overview of what we have accomplished so far in Delta Lake OSS and then what is coming up. QP and Scott talked about the reader and writer, right: we have the Delta Lake Standalone Reader and Writer, "standalone" meaning they have no dependency on Spark. The good thing is, you can expand the ecosystem, right?
C
So what that brings is: now we can build a connector for Flink; Nessie, which is... we are having meetings with Dremio; we are building a Pulsar connector; lakeFS; what else... Rust, of course; kafka-delta-ingest, which QP already covered. And oh, if you are interested in Pulsar, Ryan from the Delta team is going to do a keynote at Pulsar Summit. Did I miss anything? Apache Beam, so we're going to have something over there soon.
A
Excellent. Well, that seems like a lot, and it is. The one thing I definitely want to add for everybody here is: if you want to get involved with any of these things, we have a very active Delta community. There are, I've already lost track, more than 5,000 people on the Delta Users Slack. So it's not just forums like these; definitely talk to us in Slack. We're there to answer questions, or if you've got a particular project. For example, we recently got pinged by the Apache Heron folks about a potential integration, and we're glad to listen. So please go ahead and chime in and let us know what you're interested in doing. And I think that's it for today; there are no more questions coming in from YouTube, so I don't want to drag this on any longer than we need to. QP or Vini or Scott, anything else that we might have missed that we should have called out?
C
I will just ask this question, Denny: a lot of watchers, you know, see the excitement that we bring to the community. From your standpoint, or anybody's, right: what does it take to get excited about contributing to the code, and what guidance would you give to all the watchers here on an easy way to get started contributing?
A
Oh sure, I'll start a little bit, but then I definitely want Scott, yourself, and QP to provide your context. Okay, so I'll start with the same thing as a lot of projects: there are a lot of first-timer GitHub links, GitHub issues and PRs that we tag. In fact, we're gonna go through a PR bash, probably early to mid next week, where we're gonna go through all of them again just to re-update them, to say, hey, you know, so that way we can really get into it. And so, saying that, as opposed to listening to me about it, I want to switch to QP, because, QP, you and Tyler started that process for delta-rs. So why don't you provide a little bit of context on how you guys did it and what you liked about it? And yes, you can go ahead and, you know, ding me for what I screwed up when I was trying to be helpful. Usually helpful! So yes, please go ahead.
B
Denny, yeah, you've done a really good job. There's nothing much I can complain about, for me.
B
The main driver for me was that I actually have a real use case that I thought the Rust binding would be a good fit for. So for me, and this is not just for Delta Lake but I think for any open source project you want to get involved with, it's best for you to start with a clear goal in mind: what problem do I want to solve with this project? Starting from there, it's easy for you to find either lacking features or bugs while you're adopting the project for the problem that you're trying to solve. So that was the main driver for me. And how I got started was, I basically just started looking at the Delta specification in the main Delta core repo, right; there's a really well-maintained README file that describes exactly what the protocol expects, and that's a great way to start. Or you can read the Delta paper; I think it's also a good paper to read. So this is what I would recommend to get started, if you're comfortable with that level of detail.
A
Awesome, thanks very much. Vini, anything you want to add? And then we'll finish with... actually, you know what, I'll do it the other way: Scott, you add first, and then Vini will finish. Yeah, Scott, anything you'd like to add?
D
Yeah, well said, QP. Just to emphasize: you know, use our products, use our code base, and, you know, begin with the end in mind. You'll probably have some projects you want to work with, and along the journey you'll discover some bugs, or you'll discover a feature that you really want. So then go for it: you know, make that issue, make that PR, and discuss it with us. I've been working with a committer over the past couple of weeks who's adding a really great feature to the Standalone Reader, adding some functionality that wasn't there, and it's great to have that discussion, because we're both learning from each other, and I'm looking forward to more people contributing.
C
Oh, for me, how I contribute is literally like what Scott mentioned: it starts with finding an issue or a bug. I really get excited when I have to work with some engineers to solve the issue. Previously I was working with a lot of customers, I have worked with 100-plus Databricks customers, and what gets me excited is problem solving. I would just look at their issue and try to find a solution for them, whether it's a product feature or maybe an infrastructure-related issue.
A
Excellent, perfect. Well, I think this is a great way to end today's session. So again, I want to thank the panel here: QP, Vini, and Scott, thank you for taking time out of your day to go ahead and join today's Delta Lake Community Office Hours. If you have any questions that we did not answer, ask them inside the YouTube channel. And especially George: glad to see you joined the Delta Users Slack during this session, so we're pretty happy about that.
A
Please join us at the Delta Users Slack, and we will hopefully be able to help answer all of your questions. Again, thank you very much and have a good day, everybody. Thank you.