From YouTube: Data Platform Team Meeting | May 2, 2023
A
Okay, welcome to the data platform meeting. First, we're gonna start with the last seven days on triage, and I think Justin Wong has the first item.
B
Thanks. Let's see... okay, so Visible: that error has been happening a lot. It just says "timestamp is not recognized." I was digging around with this, and there's an issue with the table. I think I was able to make a connection directly with this table in, I guess, Visible's Snowflake, is my assumption here.
B
It's not our Snowflake, and I see this error very often when you're querying a view but the view has some faulty logic. In this case it's trying to cast a varchar column into a timestamp, but it can't, because it's not really a timestamp value, it's a comma value, and this error gets thrown. I think we just need to contact Visible and see if they can fix the underlying data. So what's the contact?
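The cast failure described above can be illustrated in miniature. Here is a minimal Python sketch (the timestamp format and the sample rows are assumptions for illustration, not Visible's actual data) of the difference between a strict cast, which throws the way the view does, and a lenient cast that flags the bad rows instead, in the spirit of Snowflake's TRY_TO_TIMESTAMP:

```python
from datetime import datetime
from typing import Optional

def to_timestamp(value: str) -> datetime:
    # Strict cast: raises ValueError on malformed input, analogous to
    # Snowflake's TO_TIMESTAMP throwing "Timestamp ... is not recognized".
    return datetime.strptime(value, "%Y-%m-%d %H:%M:%S")

def try_to_timestamp(value: str) -> Optional[datetime]:
    # Lenient cast: returns None on malformed input, analogous to
    # Snowflake's TRY_TO_TIMESTAMP returning NULL instead of erroring.
    try:
        return to_timestamp(value)
    except ValueError:
        return None

# Hypothetical column contents: one valid timestamp, one comma-laden string.
rows = ["2023-05-02 10:00:00", "n/a,pending"]
bad_rows = [r for r in rows if try_to_timestamp(r) is None]
print(bad_rows)  # → ['n/a,pending']
```

Running the lenient cast over the column first is one way to pin down exactly which upstream values to report back to the source.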
C
A question: we have a Google Sheet, right, with all of the contacts that we need to reach per source, but I am afraid it is not very specific. Sometimes it's just the website, where we need to search. I will try to find it and see who is listed there for Visible. It's also a bit sad that this contact list is hidden somewhere in an issue template; I cannot always easily find it.
A
Yeah, that's also kind of a weird place to put it, but I get it. You could also look at the tech stack YAML in the handbook or the website project and see who's the technical owner for Visible; that would be a GitLab employee. We probably should loop them in regardless, and that would be someone who could also connect us if we can't find the contact list.
C
That's a good question. Maybe the data triage issue, when we have a freshness problem, I think, but I'm not entirely sure; I need to figure it out. So let's see, data triage is...
A
Yeah, the other problem with having it out here like this is that we have to keep it updated. Maybe that's just going to be the case regardless, but the nice thing about the tech stack YAML is that it's maintained and updated by, you know, everybody. So even though it might not have somebody's contact information, it at least has a step to get there. Maybe there's a better solution here, but...
A
Maybe they might have some information; it might make sense to ping them. If they have an open issue, to tag them... I guess we'd open up an issue to tag them and the technical owner as well, and just see if we can get a lead. Okay.
B
Stay on, since you're on triage: do you think you can do the tagging? Cool, okay. Moving on then to the Ops DB perms issue, "permission denied for all tables." Ved had originally said that, I think, there were two issues here. One of the issues is that, depending on which runner runs it, there might be an error on that end, but Ved already added the runner into the allow list for the IPs.
D
I think there was a change related to the Ops DB; we have linked the Slack message there. Would it be related to that? I totally don't understand what changes they are directly making, but it could be related to that.
B
Okay, the next item is also me. Ved had caught this issue with the main Postgres pipeline: the last replication timestamp was frozen, and it turned out it just means that the snapshot wasn't actually being replicated.
B
It turned out that the infra team had moved the snapshot; I think they saved them somewhere, some sort of path, and they had moved that path and never adjusted it. Once they adjusted it, it ended up getting fixed. I think the next one is Houston.
D
Yeah, recently we have been seeing the incident count for the Postgres pipeline increasing a lot. So should we do something about it? We have started documenting this in an issue, I think. In the past two weeks we have had five or six occurrences of the failure.
A
They've all seemed semi-unrelated. It could just mean that the infra team is doing a bunch of changes adjacent to, or around, this infrastructure recently, and so we're more affected than maybe in the past. But I guess I'm saying I can't imagine what we could do, other than maybe improve our incident management process on our side, or have some sort of communication with infrastructure. I'm not mad; I just can't see how there's any real relationship to our tooling.
D
I mean, right now, I think for the affected timings, like in the morning, 2:30 a.m. UTC, I can manage. But on the days when I am not there, by the time the European folks join, there's a lot of spam, like the...
D
No, no, no, I did not mean that. I meant when I'm on leave. For example, I have leave in mid-June; I'll be gone for two weeks. So that's when, if the triage... if the gitlab.com pipeline fails, then what shall we do with it?
C
Yeah, what he means is: at that time, when he is usually available and none of us is even in Europe, not even in the US, what should we do? Because this is happening very often lately. Should we figure out a way with the infra team to have this not happen again, or is it not that awful?
A
I mean, I think there are two parts to that, right? There's the "not happen as often" part, and that's the infra team; we can definitely ask them and say: heads up, our main guy who can catch this early is going to be gone. But then the other part is that we are not set up currently for on-call, and that's, I think, on purpose, and it's not like it was covered before you joined.
A
In fact, there was a time when we only had American engineers, and the schedules were pretty similar to what they are now; we would just find out later, and that's what would happen. If we don't want that to happen, then we have to change something about our operation. But it's not your job.
C
Okay, you know, we don't have this as a goal; there was never such a level of urgency, so it's totally fine if we miss it. And if the triage happens with whoever is there, even if it's Justin in his time zone, then we should still be fine.
C
I mean, it's amazing that you are there all the time; you never let it fail. But I also honestly believe that it should be fine if it fails and nobody's there to catch it, because sure, we might miss an SLA, but it was never written that we should never miss a single one, ever.
A
Right, and I appreciate the comment, because I think it's worth us noting the risk. We've gotten used to having this distribution, but we are overly reliant on a single engineer to make sure that we have this sort of response cadence, when technically that's not part of our operation.
A
If that makes sense: if we really cared about that, we'd either have some sort of on-call function or we'd have redundancy, and the same goes for other time zones, and we have neither of those things.
B
I think one thing I don't quite get: maybe responding ASAP is not the biggest deal in the world, but if there have been a lot of errors, it might be worth digging into, from a high level, why those errors are occurring, and whether we should ask them: hey, this is happening, is there any way you can be a little more mindful and preventive of these issues? I think that is something we could proactively do.
A
Sorry, real quick, I'm just taking some notes on that conversation. I think what I'll do is note this and then ping Dennis, or at least mention Dennis in here, and we can ping him as well, and say: maybe at least it's worth a conversation. Obviously we can all help with it, but he would be in the best position to ignite that conversation about whether there's an...
B
Cool, cool. Okay, next one. This got sparked because Stark had sent me an MR for review. It goes back to the previous conversation we had about making the data engineering role read-only to some extent, at least by default, so that we can't overwrite anything in production. I think the MR that you had originally sent me, Stark, was adding an additional role, the data engineering read-only role, but it only...
B
It only has access to the production tables, and in that sense it's not necessarily useful day-to-day, because you need a role to even have access to the CI databases first, for example. So I was thinking that instead of doing it like that, we should take the existing data engineering role as-is, update it to be read-only for production, and then add a role on top of that that is "data engineering write." That's a non-default role that we can use only when we actually need write access to production.
A
I think that might end up being the same thing, though. Actually, let me go into my comment, because we can change this in a couple of ways. We can add an additional role that represents the write permissions and then change the current role to a read-only role, and then we also need to make sure that places like snowflake... sorry, manage_snowflake.py get updated to reference the correct role, so that when objects or other resources get created, the appropriate role has permissions.
B
Yeah, I guess the part I'm saying is: as-is, the data engineering read-only role doesn't have access to a lot of the databases it needs. In an ideal world, it would still be able to write to a CI database; it just could only read from production.
A
I see what you mean in terms of what we're trying to preserve, the particular risks we're trying to avoid. Thinking back to the use case, though, I'm usually in either a development mode or an analyst mode, if that makes sense, and I'd be okay just splitting them that way. The other thing is that the only real use case I've ever encountered where I need to make manual changes to a CI database is usually dropping them, and occasionally with permissioning, which...
A
That helped with dropping them, so yeah, agreed.
A
Yeah, and that's probably fine. I guess the question at hand is the particular structure, and in my mind there are probably a lot of ways we could do this in the end, but there are some necessary requirements. One is that we can't let the write-only role inherit any... sorry, we can't let the read-only role inherit any write permissions.
A
The other one is that I'd like to minimize disruption, both in terms of what we're touching and in how many things we have to update. That's not as hard of a requirement, but in terms of iteration and the boring solution, I think it would be good. So the idea here is that we need to add something to the current hierarchy at a higher level... all right, no, sorry, at a lower level than the current engineering role, because it has to not inherit all of those write privileges.
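The hard requirement stated here (the read-only role must not end up with write access to production through inheritance, while the write role sits above it and does) can be sanity-checked mechanically. A small Python sketch; the role names and privilege strings are hypothetical stand-ins, not the team's actual Snowflake configuration:

```python
# Hypothetical role hierarchy: direct grants per role, and which roles
# each role inherits from (child -> parents). Illustrative names only.
GRANTS = {
    "data_engineer_ro": {"SELECT:prod", "SELECT:ci", "WRITE:ci"},
    "data_engineer_write": {"WRITE:prod"},
}
INHERITS = {
    "data_engineer_write": {"data_engineer_ro"},  # write role sits above read-only
    "data_engineer_ro": set(),
}

def effective_privileges(role: str) -> set:
    # A role's own grants plus everything it inherits, transitively.
    privs = set(GRANTS.get(role, set()))
    for parent in INHERITS.get(role, set()):
        privs |= effective_privileges(parent)
    return privs

# The requirement from the discussion: read-only keeps CI writes but can
# never reach WRITE:prod; only the non-default write role can.
print(sorted(effective_privileges("data_engineer_ro")))
print(sorted(effective_privileges("data_engineer_write")))
```

A check like this could live in CI next to the roles YAML, so a refactor of the hierarchy cannot silently leak write grants into the default role.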
A
It might be helpful to discuss the particular use case, anything really specific about the use case, or even just post a merge request or a comment with a structure that you think would be the right hierarchy, because at this point it just seems really abstract, and it's difficult to talk about.
B
Yeah, I can do that. From a high level, it simply is the exact permissions minus write permissions to production, but I did look at the YAML file, and it's a little harder said than done, because currently it does inherit a lot. And I think that's why you went with this original solution, because it's easy. But I was just thinking about it, and I don't think I would use that role at all, because I would have to be...
B
So I could post that on the issue, but I think the way I'm imagining it is kind of difficult, and I'm not sure if there is any middle ground. But yeah, we can work from there, though. That's a great idea; I could start with what I think it needs to be, and we can just go from there.
A
Yeah, I mean, part of the issue is we have these... I'm just remembering something. Oh, I think I misunderstood it back then. We...
B
Yeah, but it does make sense to me to separate out the read and the write roles. I know they're all mixed up right now, but maybe that should be done, because I still think that, with the way it's currently set up, it is... I don't want to call it a disaster waiting to happen, but I think it's just so easy to make a mistake. Then again, there's time travel, because I actually did mess up once and then I had to... well, not on production.
A
It's worth it, Justin, pointing out the specific risks, and we can also base it off that. Generally, I've got a little bit more permissions too; I've got sysadmin, which means I could really blow stuff up, but I just never touch it unless I have a particular use case for it, but...
A
You know, and so that's the sort of question there too. The reason you are using your username role is because that's the default set on your profile, in which case, if that's the user default, we could just make a special one and then make that the default. Then you'd have to step out of that mode to do anything other than your normal workflow. But yeah, I'm worried that this is too philosophical, too abstract, at this point.
A
Yeah, it's less a matter of whether what I'm saying is making sense and more a matter of the distance between the language I'm using and the actual technical implementation of it. I feel like we're abstract right now; there's lots of discussion that could happen, but it would be faster just to start with something more concrete.
B
Okay, fair point. Seven: new data source, blah blah. It's essentially about cadence, though. So I have to ingest from GCS into RAW, which I can do with a task at whatever cadence I want, right? So I thought: oh, okay, I'm good. But the users... it's security data, and I think they want an hourly cadence, so that if there are security risks they can quickly identify them.
B
That is the impression I'm getting, but I realized that the end state of the data needs to be transformed, and it's obviously going to have to be done through dbt, which is a once-a-day job. So this hourly SLA is not achievable. This...
A
So this is, I would say, an exceptional use case for us. Most of the time we have data that we want to bring into some sort of common business model, like the enterprise dimensional model; we want it to have relevance across the organization, or at least cross-departmental in many cases. But that's not what's going on here, right? In this case you have the security team, and correct me if I'm wrong, who want a place to query data, which they don't really have easily now.
A
Right, so there'd be like a security workspace, and that way we can sort of circumvent the dimensional modeling, which is where lower latency really matters, because varying schedules can sometimes really mess up an enterprise dimensional model. And then the way we would do this, and this is cool because we used to try to do things this way, and then we've been slowly undoing the places where we were doing this, which I'm still not totally sure...
A
Why. We just need to make the extraction DAG have a task that uses the dbt image and runs a downstream dbt command for those models. So within one DAG you would have your extraction (however many tasks that takes; one or two, I would imagine), and then you'd have a dependent task that would use the dbt image and run all the downstream models for that source. You just need to make sure you have the source defined in dbt and the models downstream of that, and that would be it.
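The "run all the downstream models for that source" task maps naturally onto dbt's graph selector syntax, where `source:<name>+` selects everything downstream of a source. A small sketch of composing that invocation for the dependent task; the source name is a hypothetical stand-in for the security data discussed here:

```python
def dbt_downstream_command(source_name: str) -> list:
    # Build a dbt invocation that runs every model downstream of a source.
    # In dbt's selector syntax, "source:<name>+" means the source node plus
    # all of its children, grandchildren, and so on.
    return ["dbt", "run", "--select", f"source:{source_name}+"]

# Hypothetical source name; the real one would match the dbt sources.yml entry.
cmd = dbt_downstream_command("security_logs")
print(" ".join(cmd))  # → dbt run --select source:security_logs+
```

The extraction DAG's dbt task would then just execute this command inside the dbt image, after the ingestion task succeeds.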
B
Love it; makes sense. Actually, this extraction... originally there wasn't going to be a DAG, just because it was just going to be a Snowflake task via Snowflake, right? Since there's no API or anything. But I guess now the "extraction DAG," in quotes, would just be the dbt transformation, and we could call it something else at that point.
A
But external tables might also be a really good case for this, with GCS, because we have Snowflake. I think this might be too much for a first iteration, but you could almost get the data live with an external table. Even without that, we can write just a refresh statement, based on ALTER TABLE for the external table, that basically goes and grabs, or indexes, all those new files as they appear, and then refreshes Snowflake's references.
A
So then it appears in Snowflake, and then you could run... there's a dbt package that we're already using for external tables that basically handles that and then uses it as a source. That way I get the advantage of Snowflake tasks, though I don't think, on its own, that's a great solution. The benefit of doing it this way is that you could really link together the transformation and the extract, which would just be a refresh on the references.
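The refresh statement being described is, in Snowflake, an `ALTER EXTERNAL TABLE ... REFRESH`, which re-scans the stage location and registers newly landed files in the table's metadata. A tiny sketch that renders it; the table name is hypothetical:

```python
def refresh_statement(table_name: str) -> str:
    # Render the Snowflake statement that re-scans the external stage and
    # updates the external table's file metadata, so files newly landed in
    # GCS become visible to queries against the table.
    return f"ALTER EXTERNAL TABLE {table_name} REFRESH;"

# Hypothetical fully qualified name for the security data's external table.
print(refresh_statement("raw.security.security_logs_ext"))
```

This statement is cheap enough to schedule hourly (or attach to auto-refresh notifications), which is what makes the near-live cadence plausible without touching the daily dbt run.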
C
Yeah, the next argument would also be that we'd have this kind of code somewhere in our code base. With Snowflake tasks, we sometimes forget about them: there were tasks running for the license DB, and we just randomly figured out at some point that they were failing, and they had been failing for a year and were never used anymore. Yeah.
C
Also, another thing that I wanted to bring up: I don't know the reason why we only have one dbt run that runs everything together, but I wouldn't think that should be a limitation. Unless the models are, as you mentioned, Justin, part of another, greater data model and so on, it should not be a problem for us to run a dbt refresh as often as we need, for whatever data sources we need, through another DAG, a new DAG.
C
Also, some time ago we had a discussion about whether we should separate these dbt runs anyway, and I think we even separated some of them for gitlab.com data; they are running on different DAGs now. What we have there is a dbt monster, so to say, because it takes like eight hours to run, and I'm not even sure if this is the way it should be, and I don't think that whatever new data comes up...
A
As we think about it... I mean, the shiny word for this is "data mesh," and as we think about this sort of goal of doing something in that character, I think this is a good use case. It's not exactly that, but nonetheless, one of the parts of data mesh is this idea that you have functional groups able to operate independently of each other, which is radically different from an enterprise dimensional model, but we can have both, and I think we ought to.
A
But I think with an enterprise dimensional model it makes a lot of sense, also because that's how it gets the most advantage from dbt, with the built-in dependencies, to do it all at once. But that comes with quite a bit of limitation, and we've been breaking it up since before I started here. It started out as one task, and we've actually slowly been breaking it up over time as our project has gotten larger, if that makes sense.
B
Yeah, makes sense. Okay, I think that's really it. Well, I guess one more question, or I guess I'll investigate this, but just so that I remember: I need to investigate using regular Snowflake tables via the task versus external tables, the pros and cons, I think. One pro, obviously, would be, like you're saying, there's some dbt logic I could use with the external table, but I actually already did the dbt logic for the regular table.
A
Great. I added an item here for OKRs; not sure if there's anything to discuss, but I wanted to make sure we had a live opportunity to do that.
A
I think that's worth noting in this meeting and maybe writing down here in the agenda: there's a higher volume, one way or another, which is at least related to, if not another way of saying, a higher complexity of OKRs. Maybe it's worth saying that that's a risk, because if any material number of these explodes in scope, which seems very likely, then we can very quickly end up needing to rescope quite a bit.
C
And on the other part, which I added: I wanted to thank Justin Wong, because you did a great job with this Vora snapshot investigation, ultimately actually solving it, because we accepted your MR and I closed my MR. So that was great, really. Thank you very much for this.
C
You know, step by step, whatever it was, and we even jumped on a sync call together to debug, to see one silly problem with the dates. I'm very happy that in the end it ran fine, and we are now all done. Bravo.