Description
Sushma, Chris, and Radovan discussed a solution for the Instance Redis metrics fix.
B
What do you mean, temporary table?
A
Let me share my screen.
A
Okay, let's start from the beginning. What I have here, what I found, is the Redis metrics source table, the first one, and it's in the raw schema. I will use this for this issue. Let me check raw.instance_redis_metrics. Actually, this is the raw data we load from the API.

So this is the first step. What we have here is what I found, and initially it was only a JSON data type, only one column in this table. I enriched it with ping_date, run_id, and uploaded_at, just so we can track back if something is wrong. The next step after that is to move to the prep schema. This is also what I found, right. That is what I mean.
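(A minimal sketch of the enriched raw table described above, in Snowflake SQL; the table and column names follow the discussion, but the exact DDL is an assumption.)

```sql
-- Hypothetical DDL for the enriched raw table (names per the discussion)
CREATE TABLE IF NOT EXISTS raw.instance_redis_metrics (
    jsontext    VARCHAR,           -- the JSON payload, originally the table's only column
                                   -- (moved from VARIANT to VARCHAR, as discussed later)
    ping_date   TIMESTAMP_NTZ(9),  -- when the ping was recorded
    run_id      VARCHAR,           -- pipeline run identifier, to trace a load back
    uploaded_at TIMESTAMP_NTZ(9)   -- when the row landed in raw
);
```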
A
And the last step is prod legacy, which works against the Redis metrics: the data has been calculated here, and notice what is in this table. The initial motivation for this was duplicates, in case you have more than one load per day. I overcame and fixed that, because I use a QUALIFY on a ROW_NUMBER() where it is equal to one, so I will pick up the latest one, which is okay, and there are no more duplicates here.
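(A minimal sketch of the deduplication pattern just described, assuming an instance_id key column and the uploaded_at timestamp; the table name is illustrative.)

```sql
-- Keep only the latest load per instance and day; later loads win
SELECT *
FROM prep.instance_redis_metrics
QUALIFY ROW_NUMBER() OVER (
           PARTITION BY instance_id, ping_date::DATE  -- one row per instance per day
           ORDER BY uploaded_at DESC                  -- pick the latest load
       ) = 1;
```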
A
So actually I have three steps: raw, prep, and prod. And actually I didn't invent anything here, I just fixed what we had before, across these three steps: the first one, the second one, and the last one. When you say test table, what do you mean exactly?
B
Yeah, so here in the first step it looks like you're running a one-time SQL step, right? So I wasn't sure why we need to, you know, kind of keep that backup of the data, because anyway it's a full refresh. The data gets refreshed, even if it's an incremental model, on weekends. So I didn't know why you had to release this template.
A
But yeah, I'll stick with this. "Create" is my typo here; I created a temp table just to put all the data into the new structure and make sure everything will pass fine after the code is released. So this is not a big deal, I would say, just the way I close something. And also, I know we do not store one-time SQLs in our repo, just so you know.
A
After you review this and say, okay, I'm happy, I will release this part, we'll delete this file, and it will be live on prod. This is just, let's say, one extra step to have a backup: I will create the new table, insert the data from the old table, and just keep a backup, and if everything is fine, I will drop it at the end. So during this juggling with data, I just want to have a backup, nothing more than that.
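(The backup step described above could look like the following sketch; the speaker creates a new table and inserts from the old one, and the Snowflake zero-copy clone shown here is one assumed way to keep that safety copy. Names are illustrative.)

```sql
-- Keep a safety copy while the data is moved into the new structure
CREATE TABLE raw.instance_redis_metrics_backup
  CLONE raw.instance_redis_metrics;

-- ... rebuild raw.instance_redis_metrics with the new structure and
--     reload it from the backup ...

-- Only at the very end, once everything on prod is verified:
DROP TABLE raw.instance_redis_metrics_backup;
```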
A
And also, exactly, inside this one-time file, once everything is done and ready on prod and the data has been loaded, I have test cases for myself, just to prove everything is fine. I have three test cases, one for each stage, so you can recognize each of these stages; for each stage I will have one test case. Okay, in dbt I will separately run dbt test on the model, whatever, but this is just for my, let's say, double-checking during my development.
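(A minimal sketch of the kind of per-stage sanity check described here; the table names, including the prod legacy one, are assumptions, and the speaker's actual test cases may differ.)

```sql
-- Stage-to-stage row counts should line up after the migration
SELECT (SELECT COUNT(*) FROM raw.instance_redis_metrics)         AS raw_rows,
       (SELECT COUNT(*) FROM prep.instance_redis_metrics)        AS prep_rows,
       (SELECT COUNT(*) FROM prod.legacy.instance_redis_metrics) AS prod_rows;

-- Prod should hold no duplicates per instance and day (expect zero rows)
SELECT instance_id, ping_date::DATE AS ping_day, COUNT(*) AS n
FROM prod.legacy.instance_redis_metrics
GROUP BY 1, 2
HAVING COUNT(*) > 1;
```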
A
Because, sorry, in the previous issue I spoke with Red and he suggested I do it this way, because actually we don't have any, let's say, command to explicitly execute any kind of SQL, like DDL statements; everything is usually automatic, but here I extend and change the current structure. There is also one more reason: I changed the data type in the initial raw data. Why?
A
We use snowflake_stage_load_copy_remove, and this means, if you take a look into the source of this function, it will just simply upload one JSON into a table with one column. But here we want to have more than one column, so I switched to the dataframe uploader instead of snowflake_stage_load_copy_remove, and you can see it's dataframe_uploader when it comes to that. There is also one restriction with the dataframe uploader.
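(A minimal sketch, in Snowflake SQL, of the difference being described; the stage, file, and table names are illustrative, and the real helpers live in the pipeline's Python code.)

```sql
-- The stage-load-copy-remove pattern boils down to: the whole JSON file is
-- copied into a table with a single VARIANT column
COPY INTO raw.instance_redis_metrics_json
FROM (SELECT $1 FROM @raw.redis_metrics_stage/metrics.json)
FILE_FORMAT = (TYPE = 'JSON');

-- The dataframe uploader instead writes one row per record, with as many
-- columns as the dataframe carries, which the enriched structure needs:
--   jsontext, ping_date, run_id, uploaded_at
```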
A
Because at the end of the day, okay, I want to avoid duplicates, but during my investigation I saw a couple of problems, so I want to sort everything out, just to be sure we will not come back to this issue again and again, right? You know what I mean. And for that reason I started here: I optimized this, this is fine; I tested this scenario, also fine; then I will stick with this model. So this is how I exclude duplicates here, and also I added a couple more columns. And the main question is what I asked you.
B
Yeah, the first thing: I think it should just be the timestamp, just to match whatever we have in the source.
A
Yes, true. I checked the source table you put here, so let's take a look together. This is our table currently, after my changes; I want to add the description. Chris, probably this is the most important thing for you, because from here you will probably do some transformation: ping_date is TIMESTAMP_NTZ(9), and in the original table ping_date is also TIMESTAMP_NTZ(9), and when it comes to my table again, recorded_at is DATE. That is what you asked for, the request, right, Sushma, if I'm not wrong?
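(One quick way to verify the column types line up on both sides; the table names here are assumptions.)

```sql
-- Inspect the column types on both sides
DESC TABLE raw.instance_redis_metrics;         -- expect PING_DATE   TIMESTAMP_NTZ(9)
DESC TABLE prod.legacy.instance_redis_metrics; -- expect RECORDED_AT DATE
```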
B
With the timestamp we can capture more than one load in a single day.
A
I think this is the best time to start, right. So in that case, I mean, you know, I followed the rules as they were implemented before, but under this rule there is no requirement to have a description at the column level. All you should find is a description at the table level or source level. But okay, I fully agree with you, so I'll put the description here, right.
A
Yeah, it will be there, so let's make this complete anyway. So, okay, I got the point, we'll add this. Anything else you want to highlight here?
A
Oh yeah, I just wanted to move the data, because you know what the problem is: you can't use ALTER TABLE to go from a VARIANT to a VARCHAR; it will be a problem. So for that reason I put the data into one table, then moved the data back as a different data type, VARCHAR instead of VARIANT, and everything is fine. When I tested everything without using this SQL statement for the DDL and the CREATE TEMP TABLE, it failed, because you simply can't do that; there is a restriction in Snowflake.
A
Sixteen million something, and this will cause an error. Why? Because you can't use ALTER TABLE ... MODIFY COLUMN from VARIANT to VARCHAR or anything similar. You can switch from VARCHAR to NUMBER, from NUMBER to DATE, any of those combinations, but this one specifically is not allowed in Snowflake and you will get an error. You can check it and test it, but this applies only when going from VARIANT to VARCHAR.
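(A minimal sketch of the restriction and the temp-table workaround being described, with illustrative names; the Snowflake error is paraphrased in the comment.)

```sql
-- The direct type change is rejected by Snowflake:
--   ALTER TABLE raw.instance_redis_metrics
--     MODIFY COLUMN jsontext SET DATA TYPE VARCHAR;
--   -> fails: a VARIANT column cannot be altered to VARCHAR

-- Workaround: move the data through a temp table, casting on the way
CREATE TEMPORARY TABLE instance_redis_metrics_tmp AS
SELECT jsontext::VARCHAR AS jsontext, ping_date, run_id, uploaded_at
FROM raw.instance_redis_metrics;

CREATE OR REPLACE TABLE raw.instance_redis_metrics (
    jsontext    VARCHAR,
    ping_date   TIMESTAMP_NTZ(9),
    run_id      VARCHAR,
    uploaded_at TIMESTAMP_NTZ(9)
);

INSERT INTO raw.instance_redis_metrics
SELECT * FROM instance_redis_metrics_tmp;
```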
A
Maybe the most efficient way would have been to split this into a couple of very, very small issues, one by one, but okay, anyway, it's not too big. I think it can be digested and done under one review and one release. That's my opinion; I think it's not complicated, but yeah. I think you have the full context now, right?
B
Yes, I think this definitely makes sense to me. So let's get this pushed today or tomorrow, and then maybe we can kind of validate the data once it's deployed to production, yeah.
A
My initial plan is: I need to make this beautiful, add the descriptions and everything, run one more test of the pipelines, and tomorrow, once you're in the office in the morning, you will see everything ready, so you can do a review. For this, I mean, you don't have the privileges to merge, but you can review, so that's it.
B
Yeah, I cannot merge, I don't have the right to merge, so we'll have to assign it to Paul. That's why I added Paul as the reviewer, okay.
E
Yeah, are there any performance impacts from switching from snowflake_stage_load_copy_remove to the dataframe uploader? I was just looking at the code behind that already.
A
Paul told me it's a little bit slower, but actually what we receive here is not a too-big JSON file. We can guarantee it will not be bigger than it is, because you have a definite set of metrics, a thousand-odd metrics inside one JSON file. If, let's say, we needed to upload something bigger, without a known size and volume of data, we probably would not use this approach, definitely, yeah.
A
You never know what might happen, okay, but this is a really definite, well-known structure. A JSON with 5,000 things inside literally will not be bigger than a couple of megabytes, so there's nothing to worry about. In a previous issue I dealt with something much, much bigger, and it was a problem, really, yeah; it was never-ending.
A
Yeah, this is small and it will stay small; it probably will not grow rapidly. And anyway, I have a long-term fix in mind in case we ever load something bigger than 16 megabytes in one JSON, so probably there will be a kind of permanent solution, but that will come in the next couple of months. So yeah, I share your concern, but we'll be fine for now, we'll see. Okay, cool, yeah.
B
No, I think I got all my doubts cleared. Thank you so much, Radovan, thanks for your work. This is awesome. So thanks.