From YouTube: 2021-08-31 KT quick recap on usage_ping issues.
Description
Chris, Mathieu and Radovan discuss usage_ping details as a part of knowledge transfer.
A: Oh yeah, I got it. Okay, recording started, Mathieu. I went through a couple of videos, or one video, related to our previous knowledge transfer sessions, and I will also have a couple of questions. We'll use this document here to put our insights and findings, so feel free to put anything there if you forgot something. I also plan to share my screen; just give me a second to share this screen, this one.
A: You can probably see it, right? Yeah, okay. Just related to this change in the SaaS usage ping gitlab_dot_com renaming of the tables: I added a couple more things there. Previously, about the Snowflake part: we are done, we had finished that, and we went through a pair programming session together, something like that. When it comes to this issue, I will have a couple of questions. You probably remember you put some details here, which is kind of okay, and yeah, it's a reminder for the two of us to discuss.
A: You said to rename these tables to instance_sql_metrics in the raw SaaS usage ping; the schema will actually stay the same, only the table name will be different, right? This is clear. And everything changes when it comes to renaming: I should rename it directly in Snowflake, and the next job, once it's live on production, will put the data into the renamed table, right? Yeah, okay. Also: change the dbt model documentation accordingly, to align with the new change. This is also done. And you mentioned this: add the run ID to the table; the run_id column is added.
B: In the dbt project, okay, it's in a workspace right now, so that means that nothing got modeled; I just exposed it for exploration. That means that the data isn't properly created.
A: I agree with you, I just wanted to double-check, yeah. And you mentioned here: add one column, counter_sql_count. But I think someone put a comment there, I think it was Michael Walker, he's not in the company anymore, and you said yes, true, that can be removed. Probably this comment is related to that column, right? So we don't need this counter_sql_count, or do we have any other?
A: Okay: do we need to propagate this column a bit more into the final model? This question is the same, so yeah, we will discard the change, nothing to push forward. And yes, I should say this: you mentioned creating a new table related to SQL errors, and as I remember we mentioned this last time, and this is the title, ready to change. And you said we have included the results in the gitlab.com table; I think it could be split into different models that would contain all the errors in JSON, and that is what we agreed.
B: So at the moment, if you go to Snowflake and check this table, you'll see that it's pretty easy to see that either you have...
A: I think I got the point. My initial idea, of course, I'm not aware of all the details, so you can help me here: we have a try block, then an except with a SQLAlchemy error or a KeyError, see, yeah, an empty DataFrame, so...
A: The result is the same as for a successful execution; we simply have zero records, but yeah.
B: So the reason for this was just to troubleshoot when there are errors. When you look at the data that is returned, it should go into the success output with the counter name and the value: the key will be the counter name and the value will be zero.
B: We can get to it after your questions, okay.
A: No, it's fine! For now it's totally fine to not do it this way, but in the future I think it will be better. Okay, okay.
A: Yeah, this is okay: join here, find the join.
B: Yeah, okay, cool! So you see here, and that's for... select the counter name, the count, the cluster platform EKS, this one, yeah, copy, copy, and...
B: So I started looking at the data yesterday. Normally, what we have in our warehouse should be exactly the same as what we have in production; that means that when we count the same things, we should get the same results. It's not the case, and it's mainly not the case because we don't catch deletes.
B: And it doesn't really need to be discussed, because that clearly also means that we have a slight problem with this project, and that's something I want to mention, like, with Rob: with the current setup, our data warehouse doesn't reflect exactly what we have in the input database. Okay, so either we need to find a way to flag the deleted records...
B: That's very hard, I know. Or we need to... For me, I think the best would be to do an audit of all these queries and understand which ones are needed and which ones are not, because a lot of them, I'm sure, are barely used. They are very, very old queries that were implemented three to four years ago and that no one cares about.
B: The other option would be to say: which queries do we need, and let's see if we see a big deviation on them. For this, it would mean understanding where these queries, where these numbers, are needed, in which reporting, and so on. There's one main one, which is just for all the product KPIs; for the rest, I don't think most of it is even looked at, but that should definitely be investigated.
B: That's really something that bothers me a bit, because I don't find a good way to do it other than to just solve this problem right now, and I don't have the time.
A: Clear for now. Also, here are some reminders for me; it's mainly to discuss among the team members, but yeah. I want to check what to create for testing: is it solved with that test flow in dbt? Do we need some additional test cases for the new columns, and do the new columns need to be propagated? I will check with you. Yes, we should propagate this, and it also needs to be included in the dbt jobs. So that's one topic from my side, and I also want to use some of your time. Do you think...?
B: Totally, so I agree with you; that's a feature. So, did you meet with Ian before?
A: No, I don't think so.

B: Okay, cool. So, the guy from... yeah, yeah. The DD means Decisive Data; that means there was an agency that worked with us for a couple of months, almost a year, I guess, to help us kick-start the EDM, the Enterprise Dimensional Model.
B: Finally, so basically one thing he knew is that a build can start on one day and finish on another day, yeah. And what he wanted to do was to create, like, a perfect reporting. What happens is: assuming a job starts at 10 pm and ends at 2 am, two hours, from midnight to 2 am, will be reported on the next day, and two hours, from 10 pm to midnight, will be reported on the day when it started. So two hours on the first day and two hours on the next day, yeah.
B: That's why I created this, to have a perfect, perfect reporting. That being said, when you talk to Davis, and I think he's going to reply, he's going to tell you the purpose of this whole thing.
B: So, to be frank, this model is not used at all at the moment.
B: You can be as skilled as you want, like, you won't break anything, and I think no one will realize, especially knowing that Davis agrees that we shouldn't complexify it so much. We shouldn't care about this multi-day splitting so much; we should just care about having a basic mart that will help them create their reporting much more easily than what they do now.
A: Thank you, Mathieu. I think that's it from my side, really. I was primarily focused on this issue; I'm almost done with that and just want to do a final sync with you, and everything seems to be clear to me. Thanks for that. And also, Chris, if there are no questions, we can close the call. If it's up to me, I'm done, right?