From YouTube: Creating Tableau Datasources
Description
In this workshop we discussed:
Mapping a Sisense query to Tableau Datasource
Building a Datasource using Tableau Data Modeling with Relationships and Joins
Published Datasource Best Practices:
Removing Columns
Renaming Columns
Creating Calculated Fields
Grouping Columns
Datasource Filters
Connection Credentials
Datasource Extracts
A: Awesome, thank you again for coming today. I'm hoping to go through the basics and best practices for creating data sources in Tableau. Specifically, I'm going to take a minute at the beginning to talk about how we would break down a Sisense query and think about it in terms of a Tableau data source. So you can follow along, I've given an outline in the agenda, but it's going to be talking about that.
A: Mapping a Sisense query to a Tableau data source. Then we're going to work through building a data source and talk about a couple of elements of that, relationships and joins, and then, as we're looking at that data source, we're going to go through a list of best practices for creating data sources. So I'm going to start with sharing my screen. If you want to follow along or look at the same Sisense query, it's linked in the agenda.
A: Awesome. So this is the link to the Sisense query. On the surface this is a very simple query, but as you may know, these are nested items, so we need to look at this whole query, but we also need to look at this part of the query, so we can completely understand what's happening.
A: So as we look through this, we're looking for a few things: we're looking for tables, we're looking for joins, we're looking for custom columns or calculations, and we're looking for any place we've renamed a column. Generically speaking, everything that happens before the final SELECT statement, everything that happens in CTEs. All of this we're going to convert into a data source in Tableau. There's a little bit of back and forth, like the filters: we need to make sure that the filters that were here can be answered.
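As a rough sketch of the shape being described (the table and column names here are hypothetical, not from the actual query), everything in the CTEs maps into the Tableau data source, while the final SELECT's filters can move into the visualization:

```sql
-- Hypothetical sketch of the Sisense query shape discussed above.
-- Everything before the final SELECT (the CTEs) maps to the Tableau
-- data source; the final SELECT's filters can move to the viz.
WITH issues AS (
    SELECT
        issue_id,
        issue_title AS title,                                     -- renamed column
        created_at,
        DATEDIFF('day', created_at, CURRENT_DATE) AS age_in_days  -- calculated column
    FROM prod.issues_internal
),
labels AS (
    SELECT issue_id, label_title, label_type
    FROM prod.label_history
)
SELECT i.title, i.age_in_days, l.label_title
FROM issues i
JOIN labels l ON l.issue_id = i.issue_id   -- join: part of the data source
WHERE i.age_in_days >= 0                   -- filter: could be a data source filter
  AND l.label_type = 'severity';           -- filter: could move to the viz
```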
A: We need to make sure that fields that we're going to filter on later get passed all the way through. But afterwards, after that final CTE, most of this stuff here, even all these filters, can be done in the visualization. So we make a data source, and then we can make a chart, and what we're seeing in this part of the query we should be able to.
A: Some differences: here there are a couple of filters on the data source itself, so those will be baked in so that someone couldn't change them. Like, we want this data source to always be filtered to this, for example, or age in days is always greater than or equal to zero. Someone could make a viz off of that data source, but they couldn't change that filter; that filter will be baked into the data itself, as opposed to having one of these filters where the state is opened.
A: These are things that we can then translate into a data source, because one of the primary purposes of creating a data source is to create a curated experience, a curated environment, so that individuals aren't going in and creating these queries, but they have access to the data that they need. Like what we do here with this query: this is reducing the number of total available columns, and we're going to rename them so they're meaningful.
A: Any questions on this mapping? It's a lot more complicated than that; there are lots of little nuances to get into, but generically, are there any questions?
B
A
Do
you
think
our
Guidance,
the
guidance
right
now
and
the
principle
that
we're
trying
going
to
be
working
towards
is
business
logic
in
code
now,
so
that
would
lead
us
to
say
anything
that
we
can
put
materialize
in
any
table
in
the
in
the
or
in
the
code
say
in
DBT
we
want
to,
but
there's
always
going
to
be
some
level
where
we
will
have
calculate
calculated
Fields
within
the
within
the
visualization
tool,
things
that
have
to
be
calculated
based
on
what
you're
building,
and
so
those
are
always
going
to
be
there,
but
our
Prince
with
what
using
the
principles
or
guidance
we're
going
to
minimize
those.
A
So
these
are
these
are
pre-calculated
Fields.
These
are
a
good
example
that
these
could
just
exist.
Probably
in
the
data
source.
They
don't
depend
on
the
granularity
within
the
visualization
itself.
They
aren't
dependent
on
anything
like
that,
so
these
could
probably
go
down
into
the
database
tables,
but
there's
there's
never
going
to
be
a
rule
that
says
you
can't
use
calculated
columns
in
tableau.
A: They are more brittle, and again, they're hiding business logic underneath all these layers of Tableau; they're very hard to get to, hard for people to document and look at, and they're non-transparent, to put it in GitLab values terms. So when we need to build a data source in Tableau, we want to use Tableau data modeling.
A
So
let's
look
at
that.
This
one
I'll
start
with
a
new
sheet
here
and
I've
linked
a
data
source
I've
linked
a
model
rather
I'll
go
back
to
that
now.
Actually,
just
look
at
that.
First,
this
one
yes
so
linked
to
model
is
issues.
History
model
I'm
not
going
to
try
and
recreate
this
entire
thing.
There's
lots
of
business
logic
in
here
I'm
using
it,
mostly
because
it's
tables
that
you
might
be
familiar
with
and
specifically
I'm,
going
to
try
and
recreate
a
few
of
these
joins.
A
Maybe
just
one
depending
on
what
kind
of
questions
that
we
have
so
as
we
look
at
this
model,
we're
seeing
I'm
saying
I'm
going
to
look
I've
practiced
this,
but
I'm
going
to
so
let's
take
a
look
at
this
one.
This
is
a
very
so
we
need
the
label
groups
and
it's
getting
joined
to
looks
like
there's
a
connection
between
issues
here
and
dates,
and
if
we
follow
those
back
up,
we
can
see
what
the
tables
are.
So
we
have
this
issues,
internal
issues,
enhanced.
A
We
need
label
history
and
we're
going
to
be
looking
at
dim
date.
The
idea
is
we're
going
to
try
and
again
we're
using
metabolo
data
modeling
we're
going
to
rebuild
this
talk
about
how
how
we're
doing
that
so
as
I
go
into
Tableau
I'm,
going
to
start
start
from
the
Fresh
I've.
Already
I've
already
done
this,
so
we
can.
If
I
mess
up,
we
can
always
just
focus
on
that,
but
we're
going
to
start
start
from
start
from
the
beginning
by
adding
a
new
data
source.
A
So
as
we
select
snowflake-
and
this
is
our
snowflake
server,
one
thing
I
want
to
point
out.
That
will
be
important.
Is
you
especially
as
you
create
data
sources,
is
to
leave
this
role
blink
if
you
supply
your
own
role,
it's
not
possible
for
other
people
to
authenticate
if
they
need
to
go
in
and
edit
it.
But
if
you
leave
this
optional,
it
will
just
bring
up
there,
the
oauth
and
then
anyone
else
who
would
have
access
to
that
data
would
be
able
to
then
authenticate,
and
maybe
edit,
that
data
source.
A
That's
all
happening
on
my
other
screen,
so
it's
happening,
I
promise
all
right
as
we're
building
a
data
source
that
is
intended
to
be
used
for
my
multiple
people.
It
is
all
right
for
us
to
use
the
reporting
Warehouse
and
if
you
yourself,
don't
have
access
to
that
we
can.
We
can
submit
access
requests
to
do
that,
so
that
you
can
develop
through
that.
That
is
the
same
warehouse
that
we're
using
precisense
queries.
A: It was the issues internal, issues enhanced... or is that what we're building? No, I don't remember.
A
So
this
gives
us
this
object
that
represents
our
table.
This
is
actually
one
layer
of
abstraction
above
the
table
and
we'll
get
into
that.
Second,
it's
just
important
for
you
to
kind
of
recognize
this
and
get
familiar
with
what's
going
on
here,
but
we
can
add
filters
directly
to
this
whole
process
here.
A
This
whole
canvas
here
is
what
we
refer
to
as
tablet
the
table:
data
modeling
space-
you
can
add
filters,
you
can
see
the
columns
and
then
you
could
refresh
the
data
to
see
a
sample
of
what
that
data
would
look
like,
but
one
of
the
things
we
want
is
the
label
history.
So,
as
we
bring
this
out
in
this
interface,
we
see
that
it
immediately
gets
connected
to
it,
and
what
this
kind
of
connection
is
called
is
a
relationship
relationships
are
a
level
of
abstraction
above
table
joins.
A: So if we go back to here, we're going to look at severity. Now, we don't have to worry about left join versus inner join; Tableau, with this extra level of abstraction, is going to basically take care of that for us. We can fine-tune it with our many-to-many cardinality settings, but we're not going to worry too much about that now. This first one could be done as a filter.
A
We
could
just
say:
hey
filter
this
labels
table
to
where
label
type
equals
severity,
but
we
should
also
be
able
to
apply
it
in
the
join
as
a
only
bring
in
the
severity
type.
I
haven't
tried
this,
but
this
is
how
what
it
would
look
like.
I
mean
I've
built
it,
but
I
haven't
actually
done
it.
So
we
would
come
over
to
our
labels,
history
label
type,
and
we
want
that
to
equal
a
calculated
field
of
severity.
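In plain SQL terms (with hypothetical table and column names), the two options being compared look roughly like this:

```sql
-- Option 1: filter the labels table, then join (hypothetical names).
SELECT i.issue_id, l.label_title
FROM issues i
JOIN label_history l
  ON l.issue_id = i.issue_id
WHERE l.label_type = 'severity';

-- Option 2: put the constant condition in the join itself, which is
-- what expressing it as a relationship predicate amounts to.
SELECT i.issue_id, l.label_title
FROM issues i
JOIN label_history l
  ON l.issue_id = i.issue_id
 AND l.label_type = 'severity';
```

For an inner join the two forms return the same rows; for a left join they differ, since the second form keeps issues that have no severity label.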
A
I'm
going
to
move
this
over
to
my
second
monitor,
so
I
can
reference
it
more
easily.
We
want
issue
ID
and
dim
issue
ID,
so
we
want
dim
issue
ID
and
issue
ID.
We
want
these
to
be
equal
now,
the
next
to
join,
I
guess
I'll
bring
you
back
from
this
comment.
Oh
good,
thank
you
for
scrolling
away
from
me.
The
next
join
actually
is
on
to
the
dates,
but
we
don't
have
the
dates
brought
in
yet
so
we
can't
actually
yet
perform
this
relationship.
A
We
can
actually
build
this
relationship
so
now
I'm
going
to
show
you
the
second
part
of
second
way
you
can
build
tables
and
connect
them
together
in
the
type
of
data
modeling
space.
So
with
each
of
these
objects,
as
I
said,
they
are
an
abstraction.
A
We
can
actually
go
into
them
and
this
item
here
represents
the
table
itself,
and
so
when
we
want
to
do
a
direct
join,
we
want
to
come
in
here,
and
this
is
where
we
will
perform
the
joins,
and
we
want
to
join
the
dim
date
here
directly
and
I'm,
not
saying
that
this
methodology
is
the
best
practice
on
how
you
I
would
exactly
build
this
for
this
use
case
I'm,
using
this
as
an
example
of
how
to
show
you
the
different
methods
of
combining
things.
So
it's
not
a
tutorial
on
building
this
table.
A
It's
a
guide
on
how
to
build
tables,
how
to
build
your
data
sources.
So
as
we
bring
this
one
out
and
we
drop
it
into
this
space
into
the
canvas,
it's
going
to
say.
Okay,
let's
join
the
table.
How
do
you
want
to
join
it
based,
on
our
other,
the
query
we're
trying
to
match
we're
going
to
do
an
inner
join
and
we
want
it
to
be
on
I'm,
simplifying
the
query:
that's
the
join!
A: Now, something to note, and we'll have a problem with it later: these are different data types. Created at is a timestamp, date actual is a date, and so we want to make sure that these are going to match up. The way we did it in the query, we would truncate: we did a DATE_TRUNC on the created at. So we can do the same thing here, or we can go into a calculation and apply a date trunc.
A
The
syntax
is
different
than
snowflake
syntax,
just
a
little
bit,
but
a
lot
of
the
same
functionality.
Some
some
of
the
same
functionality
is
there,
but
we
can
apply
a
date
Trump
to
this
I'm
at
time
of
calculation,
and
so
we
know
that
that's
going
to
get
the
day
of
the
created
that
and
that's
going
to
be
equal
to
the
data
actual,
and
we
also
want.
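A minimal sketch of the pattern being described, with hypothetical table names; in Snowflake SQL the truncation looks like this, and the Tableau calculated-field equivalent is noted in the comment:

```sql
-- created_at is a TIMESTAMP and date_actual is a DATE, so truncate
-- the timestamp to the day before comparing (hypothetical names).
SELECT i.issue_id, d.date_actual
FROM issues i
JOIN dim_date d
  ON DATE_TRUNC('day', i.created_at)::DATE = d.date_actual;

-- The equivalent Tableau calculated field for the join predicate
-- would be: DATETRUNC('day', [Created At])
```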
A: And as we've started building this, we can come down here and see that it's joining the tables together, and we could even do an update of the table to see the data. It's showing us all the columns from all of the tables; we'll get into those in a second, but just know that right now it is joining it. So now that we've done that, we can see that the icon here has changed.
A: What was it the SQL query used? A BETWEEN. So we want it to be greater than or equal to label valid from.
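The BETWEEN-style predicate being recreated here, sketched in SQL with hypothetical names (a common valid-from/valid-to range-join pattern):

```sql
-- Hypothetical sketch: match each issue's timestamp to the label row
-- that was valid at that time (valid-from / valid-to range join).
SELECT i.issue_id, l.label_title
FROM issues i
JOIN label_history l
  ON l.issue_id = i.issue_id
 AND i.created_at >= l.label_valid_from
 AND i.created_at <  l.label_valid_to;
```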
A: So now this is working. We have our tables: one table that's joined directly, and the table that's related to it. Relationships are valuable when you might need to analyze the tables separately and together. Say we joined these together and it changed the grain of the table, it increased the number of rows; that would throw off sums or averages or anything like that. So having them only related: if I'm only looking at one side's data, say I want to look at counts of labels...
A
I
could
look
at
that
without
the
grain
change
that
I
get
from
drawing
into
internal
history,
and
it
would
only
query
the
label
history
table,
it
wouldn't
actually
perform
that
join.
So
that's
why
relationships
can
be
valuable,
whereas
if
we
did
just
the
inner
join,
we'd
have
to
do
something.
Funky
like
okay
I,
have
to
remember
to
do
a
count.
Distinct
or
I
have
to
do
some
sort
of
more
complicated
calculation
to
get
only
the
the
number
of
records
for
the
original
one,
and
so
these
this
abstraction
is
helpful.
A
For
those
reasons,
are
there
any
questions
on
this?
How
we
combine
tables
in
in
the
table
of
data
modeling
space.
A: All right, we'll use this as an example: removing columns.
A
With
again
the
idea
that
this
is
a
curated
experience,
we
want
to
make
sure
that
those
who
are
going
to
be
using
it
and
the
reports
that
are
going
to
be
made
from
it
only
have
The
Columns
that
they
need
or
might
need.
You
know
as
we're
building
something
that
is
is
intended
to
be
used
for
an
unknown
number
of
visualizations.
A
We
don't
just
limit
it
to
the
ones
we
know
we
can
expand
it
to
the
ones
that
should
be
of
value,
but
if
we
know
that
there
are,
you
know
things
that
won't
be
a
value.
We
can
go
ahead
and
eliminate
them
a
common
one
might
be,
and
we
don't
have
them
in
this
table
in
a
lot
of
tables.
There's
this
DVT
updated
at
which
may
not
be
relevant
for
a
lot
of
the
things
or
like
when
we
we've
joined
date.
Here
we
might
have
to
join
date
several
times
we
can
say.
A: Hiding columns: you can select, okay, we really don't need any of these date fields, and we can just hide them. That leaves us with a much more pared-down list of columns. Maybe we don't need the label subtype; we can just hide it. As you're working on that, this is where the curation comes in: you pick and choose.
A
It
all
depends
on
how
snowflake
is
going
to
interpret
the
query
as
well,
what's
needed,
but
generically
speaking
the
fewer
columns,
the
better
the
experience
both
for
the
end
user
and
for
the
queries.
So
that's
removing
column.
Let's
talk
about
renaming
I
suggest
that
it
is
renaming
columns,
and
so
any
of
these
columns
you
can
come
down
and
rename.
A
That
simply
and
the
idea
is
you
know,
Tableau
will
take
a
first
go
at
renaming,
so
we
can
see
here
that
the
original
name
is
Ellen
caps,
camelcase
snake
case.
It's
all
sneak
case,
because
that's
how
we
build
column
names
in
the
data
warehouse,
and
so
it's
going
to
try
and
recreate
that
or
kind
of
do
do
something
different.
But
maybe
it's
it's
put
this
in
sentence
case
or
a
proper
case,
and
this
is
what
it's
called
where
just
the
first
word.
A
First
letter
of
each
word
is
capitalized,
and
maybe
we
don't
want
that.
Maybe
we
don't
want
of
capitalized,
and
so
we
would
come
in
here
and
say:
no,
that's
not
proper,
that's
not
how
we
want
it
and
we
can
go
through
and
edit
them
after
you've
created
the
data
source
and
new
columns
are
added
like
say
in
the
database
table
they're
going
to
come
in
looking
at
the
database
name,
and
so
we
would
then
want
to
come
in
and
completely
rename,
but
you
can
rename
any
of
these
objects,
including
the
name
of
this
data
source.
A
So
if
we
so
right
now,
it's
it's
giving
us
the
full
full
detailed
name
and
really
wait.
This
was
we
were
building
was
issue
history.
We
could
just
rename
it
there
again.
The
idea
for
renaming
is
the
curated
user
experience,
so
you
need
to
think
through
who
should
be
using
this
table
and
who
could
use
this
table.
I
went
through
and
processed
a
data
source
for
the
sales
org
and
there's
a
lot
of
acronyms.
A
lot
of
acronyms
I
didn't
understand,
and
so
I
had
to
go
and
get
them
say.
Okay.
A
What
does
this
mean?
How
should
I
label
this?
Some
of
the
acronyms
stayed
some
of
the
acronyms
didn't
so
the
again,
the
idea
of
renaming
is
so
that
it's
clear
when
you
go
to
use
that
data
source,
what
it
means
and
what
it
is
so
keep
that
in
mind
as
you
go
through
it
and
label
things
like
URL
again,
if
it's,
if
it's
obvious
that
that's
the
Ura
URL,
that
should
be
fine,
but
is
it
what
URL
is
it?
A
Here's
an
acronym
sus,
impacting
you
might
know
what
that
is,
and
if
you're
the
only
audience
for
it,
that
might
be
fine
but
again
keep
in
mind
transparency.
If
someone
else
was
coming
to
use
this
table,
would
they
know
what
that
means?
A: You can add a fuller description here. We're relying mostly on the dbt docs to provide most of our descriptions, so you wouldn't have to worry about this too much. But if you had a complicated calculated field that you baked into the data source, this would be a good place to document it. Let's move on to my next topic, creating calculated fields in the data source; again, I'll do it from this view.
A: We could create this calculated field. Again, the syntax is different and we'd have to go through and modify it, but we could then create that, and it's now part of the table.
A
Now.
What
so,
when
we
save
this,
this
field
will
be
available
for
use,
but
not
edit.
If
someone
is
just
connecting
to
the
data
source
to
to
build
a
data,
visualization
say
an
Explorer.
They
won't
be
able
to
edit
this
calculated
field,
they'll
be
able
to
create
their
own
new
ones,
but
they
won't
be
able
to
edit
this
one.
So
that's
something
that's
important
to
remember.
D
B
A: If you use custom SQL to define the columns, it will be less performant than hiding the columns when you just drag the whole table object in. The reason is how Tableau has to treat that SQL: basically, it will use the custom query in its totality, bring that in, and then do any other relationships with it. If you use the table object to define it and just hide the columns, then it will never query...
A
Those
columns
snowflake
will
know
if
the
column
is
used,
you
know,
and
it
won't
won't
bring
it
in
like
so
you
know
it's.
It's
snowflakes
are
smart
enough
to
help
out
there,
but
it's
also
again
remembering
there's
a
there's,
a
query
experience.
You
know
the
the
the
Builder
experience
the
Explorer
and
viewer
experience
there,
but
yes,
it
will
always
be.
It
will
be
more
performant
to
not
use
custom
SQL.
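The performance point can be sketched in SQL; this is an illustration of the idea with hypothetical names, not the literal SQL Tableau emits:

```sql
-- With custom SQL, Tableau treats your query as an opaque derived
-- table, so every column in it is part of the inner query:
SELECT t.issue_id
FROM (
    SELECT issue_id, title, created_at, labels  -- all columns fetched
    FROM prod.issues_internal
) t;

-- With the table object and hidden columns, Tableau can generate a
-- query that names only the columns actually used:
SELECT issue_id
FROM prod.issues_internal;
```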
A
We'll
see
what
else
did
I
have
in
my
list
here
grouping
columns.
So
let's
say
we
need
all
of
the
columns
that
we
have.
A
You
know
that's,
for
example
like
when
I
brought
over
those
sales
marks,
they
had
well
over
100
columns
and
they
needed
them
all.
You.
C: If you have a really complicated column in Tableau that you make a calculated field, it will be more performant to have it in the SQL of the extract than in Tableau. Like, if you need to do really specific stuff, you might end up using something called a level of detail calculation; those take a long time for Tableau to process. If you're using extracted data sources and it's in the SQL, it's faster to have it in the SQL of the extract.
A: Yeah, there are some crazy things, and it might be relevant to you: there are ways to just pass the SQL directly down to Snowflake. For example, Tableau doesn't know what arrays are; when you bring in an array field, it's just a string.
A
So
if
you
need
to
perform
an
array
calculation,
you
have
to
pass
that
raw
SQL,
that's
a
function
within
Tableau
and
those
aren't
as
performant
either,
because
it's
it
can't
do
any
of
its
own
optimization.
So
anytime
that
anytime,
you
can
do
something
directly
into
the
table.
It'll
be
more
performant,
I
mean
Tableau
is
good,
but
snowflake
having
it
already
materialized
as
a
snowflake
table,
will
always
be
more
performant.
So
anytime,
you
can
drag
those
down
better.
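As a hedged sketch, Tableau's pass-through functions (RAWSQL_STR and friends) embed a Snowflake expression directly, with %n placeholders for Tableau fields; the column and field names here are hypothetical:

```sql
-- Snowflake expression you might want against an array column
-- (hypothetical column name):
SELECT ARRAY_TO_STRING(label_array, ', ')
FROM prod.issues_internal;

-- In a Tableau calculated field this would be passed through as:
-- RAWSQL_STR("ARRAY_TO_STRING(%1, ', ')", [Label Array])
```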
A
The
curating
user
experience,
so
we
have
multiple
tables
here
and
right
now.
All
of
the
fields
that
we
brought
in
are
grouped
by
those
tables
by
those
relation
abstracts
that
we
saw
earlier
and
so
right
now
they're
grouped
by
data
source
table.
But
let's
say
that's
not
the
grouping
that
we
needed.
So
we
go
here
to
group
by
folder
and
now
they're,
all
just
in
one
big
list
until
we
start
creating
our
folders
and
you
create
folders
by
just
selecting
The
Columns
you
want
and
it
is
folders
create
a
folder.
B
A: These aren't good column names; this is just an example. But now you can say these are all related, put them here together. So as you build them out, you can start collapsing them and save that as the default state. Everything will then be in folders, and that should make the user experience faster: okay, I need to go here to get to these Boolean values. And again, that should be a transparent thing when they come in.
A: So that's how you group things. Again, I'm going to hammer on these principles: curated user experience. For someone who is not you, who may be less familiar with the data, how would you want to group these to make it easy? That should be the guiding principle.
A
What
else
did
I
put
in
here
I?
This
was
kind
of
just
started:
I
started
listing
out,
grab
bag
of
things,
data
source
filters
again,
I
mentioned
at
the
beginning.
If
you
apply
a
filter
to
the
data
source
and
then
you
publish
it
as
a
data
source,
that
filter
will
be
there
for
everybody,
and
you
won't
be
able
to
change
it.
People
will
be
able
to
add
additional
filters
when
they
build
it
out,
but
they
won't
be
able
to
explore,
won't
be
able
to
affect
a
change.
A
You
can
add
data
source
filters
here,
so
we
would
say
date
actual
we'll
do
relative
date.
A
It's
going
to
take
a
minute,
and
let's
say
we
just
want
to
look
at
the
last
three
days,
and
so
if
we
save
that
and
then
we
publish
it,
this
data
source
will
only
ever
have
the
last
three
days
in
it
as
a
query:
it'll
only
ever
acquire
the
last
three
days
and
that's
all
that
will
be
available
to
someone
connecting
to
that
data
source
and
they
won't
be
able
to
extend
that
back
any
farther.
So
it's
supposed
to
it's
it's
a
double-edged
sword.
A
But
at
the
same
time
you
can
increase
performance
and
just
a
general
curated
exploration
capabilities
by
filtering
it
to
only
other
things
relevant,
that's
topical
if
I
say
for
this
data
source
that
you're
creating
I
mentioned
next
to
my
list
is
connection
credentials,
which
I
mentioned
where
we
want
to
leave
the
role
optional.
A
role,
blink
and
see
last
on
my
list
is
data
source
extracts.
A
However,
there
are
limitations.
We
have
a
total
storage
capacity
on
tableau
that
accounts
for
all
workbooks
all
data
extracts,
and
so
we
can't
just
extract
everything,
even
though
there
might
be
more
performant,
we
have
to
pick
and
choose
if
you're
developing
locally
and
you
want
to
use
an
extract.
Those
are
all
fine,
but
when
we
go
to
publish
it,
then
we
will
so
default
to
a
live
connection,
because,
as
you
extract
something
again,
it's
going
to
take
a
copy
of
that
data,
but
these
are
also
valuable.
A
So
it's
like
I'm
trying
to
teach
you
that
how
to
use
them
but
warn
you
against
using
them,
or
rather
it's
a
tool
in
the
toolbox,
but
we
don't
always
use
it.
Something
to
keep
in
mind
is
that
the
extract
has
its
own
set
of
filters,
so
you
can
apply
it
will.
If
you
have
a
data
source
filter,
it
will
bring
this
in,
but
you
can
add
additional
filters
and
it
will
extract
based
on
those
filters.
A
I
made
this
mistake,
where
I
had
a
data
source
filter
with
one
date
range
and
an
extract
filter
with
a
different
date
range
and
so
I
had
I
was
look.
The
data
source
filter
was
the
last
three
days,
but
I
haven't
had
an
extracted
for
like
10
days
and
so
I
was
getting
nothing
because
the
extract
was
a
certain
date
range.
A: These are valuable tools, and when we start talking about performance, we will use them. But even then we have to start talking about how much data we're really bringing in. I don't think the engineering team here has a lot of data, but there are other teams that regularly query gigabytes of data, return gigabytes of data in their queries, and if we started extracting all of those, we would run out of space very quickly.
A
Extracts
was
the
last
item
on
my
list
of
things
to
discuss
so
now.
I
want
to
open
it
up
for
questions
on
what
I've
talked
about
in
any
other
general
questions
about
creating
data
sources.
D: I guess I'll just go back to the question that I posted here. I didn't understand the previous comment where I attached this: Raul was asking about the desired long-term approach, and you said that it's best to stick with the published data sources. How I understood it was, oh, use whatever table that's published in the Tableau virtual connection. So I think those are two different things.
A
So
your
team
is
going
to
be
responsible
for
creating
the
published
data
sources,
and
so
that's
part
of
what
this
training
is
for.
So
you
will
go
about
it.
Kind
of
as
I
walk
through
the
beginning,
connect
the
date
connect
directly
to
snowflake
and
then
publish
the
data
source.
I
didn't
actually
show
pushing
the
buttons
of
publishing
a
data
source,
but
after
you've
done
everything
that
we
showed,
maybe
we'll
just
bring
it
back
up.
D
A
Good
and
we'll
talk
about
virtual
Connections
in
a
moment
to
answer
that
question.
So,
once
we've
created
this
data
source,
all
you
would
have
to
come
over
here
to
do
is
publish
to
server
and
that'll.
Ask
us
to
connect.
I
won't
go
through
that
it'll.
Just
ask
you
to
connect
it'll,
give
you
a
name
and
it'll.
Ask
you
to
put
in
a
location.
Maybe
I
can
do
this
quick.
My
connection
can
be
different
and
difficult,
sometimes
because
I
have
access
to
several
things.
A
So
I'll
resolve
that
later
again,
it's
it's
similar
to
publishing
workbooks
from
their
desktop
it'll,
just
be
a
different
file.
Type
it'll!
Ask
you
for
a
location
and
about
embedding
the
credentials.
This
is
probably
why
I
had
it
on
my
notes.
When
you
publish
the
data
source,
it
is
okay
for
you
to
embed
your
credentials
for
your
team
long
term,
like
so
you're
in
development,
and
you
just
need
to
work
on
it
or
you
want
multiple
people
to
you'd
be
able
to
use
it.
You
can
embed
your
credentials
long
term.
A
That
is
a
username
and
password
that
the
data
team
will
put
in
there
and
that
will
just
it'll
work
for
everybody
so
that,
just
as
a
side
note,
there
I
forgot
that
in
my
in
my
script,
to
get
back
to
your
other
question
about,
you
saw
all
the
tables
or
the
connections
there.
The
scheme
is
there,
those
were
probably
virtual
connections
and
we're
actually
going
to
be
moving
away
from
using
those
we
wanted
to
use
them
in
a
specific
way.
They
don't
really
support
it.
A
They're
non-performant,
we'll
reevaluate
later
in
the
future,
but
even
as
like,
specifically
to
the
question
you
had
in
the
agenda,
if
a
new
column
is
added
in
a
table
in
Snowflake,
is
that
table
it?
Should
it
should
pick
up?
This
virtual
connection
should
pick
up
the
new
columns,
but
it
may
not.
It
won't
pick
up
new
tables,
it
all
has
to
do
manually
and
it's
not
as
performant.
So
we're
going
to
fully
be
we're
going
to
be
decommissioning.
A
The
virtual
connections
that
you're
seeing
and
then
relying
on
the
analyst
teams
to
build
out
published
data
sources
for
the
Explorer
tools
who
those
who
are
going
to
be
Explorers
so,
like
I,
said
that
that'll
be
your
team's
responsibility
to
build
out
those
public
data
sources
for
the
engineering
uses
or
whoever
else
you
support.
It
would
be
your
team's
responsibility
to
build
them
out
and
that's
why
we're
focusing
have
so
heavily
on
the
curation
of
the
data
sources.
Does.
D
A
We
shouldn't
be
publishing
something
that
somebody
already
has
notion.
So
that's
a
good
thought
to
have
like
wait.
A
minute
someone's
already
done
this,
but
if
you're
the
owner
of
the
data,
internal
issue,
history
or
engineering
issues
and
someone
else
has
a
published
data
source,
you
should
be
going
and
asking
them.
Why
or
you
should
be
working
together
to
collaborate
and
combine
you'll
see
this.
The
people
team
is
very
Earnest
in
their
desire
to
own
the
people
data.
A
So
if,
if
you
publish
a
employee
directory
they-
and
they
already
have
one
they're
gonna
be
like
hey,
why
don't
we
work
together
to
make
sure
you
have
what
you
need
in
our
employee
directory
they're,
going
to
try
and
be
watching
and
take
ownership
of
things
that
are
exclusively
people
data
and
you
can
do
the
same?
It's
like
hey
I
noticed
that
you're
looking
at
Mr
close
rate
for
your
team.
We
have
this
data
source
over
here.
That
has
all
that
in
it
already.
Does
it
have
everything
you
need
or
can
we
help?
A
You
add
things
that
you
need
to
it
and
that's
how
the
ownership
of
data
sources
should
go
and
right
now
it's
going
to
be
on
the
teams
to
to
look
to
ask
questions
to
know
or
have
a
data
catalog
so
that,
as
you
publish
create
a
data
source
for
production
use
so
that
you
know
multiple
can
use
it.
We
want
to
add
that
to
the
data
catalog,
it's
a
handbook
page.
A
So
if
you're
looking
for
a
data
source,
you
should
go
to
the
handbook
page
and
say
what
data
sources
are
there
and
what
do
they
have?
I,
don't
see
anything
like
what
I
think
I
need.
I'm
gonna
go
build.
My
own
I'm
gonna
go
build
a
new
one
for
my
team
or
oh
I,
see
that
people
already
has
a
head
count,
something
that
should
have
headcount
in
it.
I'm
gonna
go
use
that,
and
rather
than
trying
to
create
your
own
from
the
data
that
you
might
have
access
to.
D: Yeah, that makes sense. Actually, now I have a second, related question.
C
D
A
In
here
properly,
General
answer
is
yes,
let
me
let
me
try
and
show
you
what
that
looks
like
it's
a
little
more,
it's
a
little
different.
A
Training,
it's
telling
me
that
table
is
not
available.
Can
you
get
to
tableau.
A: There's also a section in Tableau called External Assets, and you can go and find it there. It should basically be keeping a note of the table that we're pulling in from Snowflake, and you can say, hey, where is this table being used, and similarly trace its lineage to the workbooks where it's being used. That's all through the UI in Tableau. Do we have a plan to develop additional tools?
A
Maybe
a
report
that'll
make
it
easier
to
go
and
find
those
things
in
the
future,
but
right
now
it's
really
for
the
creators
to
say
go
into
the
UI,
say:
okay,
where's,
this
I
I,
this
issue,
enhanced
engineering
issues,
where's
it
being
used.
Okay,
I
see
it's
already
in
three
different
data
sources:
let's
go,
let
me
collaborate
with
them
and
talk
to
them
about
enhancing
them
or
consolidating
them
into
one.
C
A: It is, it's the same error. Hopefully we get that resolved soon, because I have a lot of work to do in Tableau today. Are there any other questions? We're coming up on time.
B: I have one final one, just to be clear. When we start publishing these data sources, are we going to be publishing them to where those data sources are now, the Resources folder, I guess? Is there a specific place? I guess, again, my big thing right now is just thinking about organization.
C
B: Just thinking, because, yeah, in engineering we end up dealing with a lot of different tables, and teams don't really care about it sometimes, yeah.
A
So
the
resources
project
is
designed
for
those
workbooks
and
or
data
sources
that
the
Enterprise
Central
data
team
is
going
to
own
and
manage
so.
If
we're,
we
might
go
through
and
say,
take
all
of
our
tables
at
our
Mark
tables
and
create
a
publish
data
source
for
them
and
we'll
put
them
into
the
resources
folder
if
you're
building
something
for
the
explorers
that
are,
you
know
going
to
be
signed
up
for
engineering,
it's
okay
for
you
to
publish
your
data
source
in
your
section
in
your
project
in
the
engineering
projects.
A
Again,
you
can
move
from
Dev
to
ad
hoc
to
production,
just
like
the
workbooks,
and
so
you
can
house
that,
with
the
workbooks
we're
moved
away
from
having
different
folders
for
the
different
types
of
things.
Like
data
sources
versus
workbooks,
there's
icons,
you
can
filter
and
sort
by
those.
So
there's
just
wherever
you
would
put
workbooks,
you
can
put
the
data
sources
and,
yes,
we
can
talk
about.
We
I
know
we
Raul,
you
and
I
have
talked
about
a
more
granular
project
structure
and
how
that
how
that
can
fit
in
and
yes,
you
can.
A
We
just
need
to
work
with
trying
and
the
bi
team
to
build
those
in,
and
so
we
can
make
sure
they're
permissions
properly
in
the
dev
space.
You
can
create
your
own,
we're
not
controlling
those.
So
if
you
want
to
trial
it
out
and
try
and
figure
it
out,
move
things
around
build
projects
to
close
projects
in
the
dev
space.
You
should
be
able
to
do
that,
but,
as
you
move
into
the
ad
hoc
in
a
production
space
that
will
require
the
bi
team's
support
to
build
those.