From YouTube: 2020 05 28 Postgres ai Evaluation Kick off
A
We want to review some of the common goals, figure out where we want to record our feedback on this one, and probably schedule a recurring time to talk about it, or a wrap-up time at the end — whatever works best for the group here. So with that: are there any questions before we jump in and let Nick talk about what's involved with Postgres.ai, how we get access, and all those things?
B
So I already configured the existing instance and changed several things. What worked for Slack, I configured for the web UI version and the CLI, and so on. I've put instructions here, and we prepared a special list of onboarding tasks, so I can create an issue for everyone — a separate one with checkboxes — just to simplify getting started.
B
I can share my screen right now, walk through it again, and show you the main features. If you just go to Postgres.ai and sign in with your Google Account, you will probably automatically join the GitLab organization. If it didn't work for your address, I will add you manually after I finish this demonstration.
B
After this demonstration I will add you manually, and you will be here. Basically, we have several important features here which will allow you, I hope, to work more efficiently on performance and to experiment with new ideas such as partitioning. First, we have checkups here. Yesterday I generated a new report; if you check it here, you may see the same reports as usual, but detailed, and we have the JSON files here.
B
For example, in terms of the int4 primary key problem, for the events table we have 35 percent. So it's like centralized storage for checkups. The next important thing: we have Joe here, and you can just start working with Joe the same as in Slack. It will take 15-20 seconds to start, because the instance is quite busy — people are working in Slack with some heavy queries — but anyway, it's working well; I tested it yesterday. Let's see while it's starting.
B
Let me show you: every command you use will be recorded, and you see the history here, so you can check various queries later. You can see the same details as usual, but with a permanent link and with three options for visualization tools. I think you know them; they are embedded here into the platform.
B
So it's a very private installation — not public, unlike, for example, explain.depesz.com or explain.dalibo.com — and you can use this link within this edition, and people will immediately see the same visualization. Right now we are thinking about how to share this information better; for example, we are thinking about adding settings for a specific page to make it public. So currently everything requires authentication using a Google account, but optionally...
B
If it's useful, we can add the capability to make things public, to share with everyone if needed, and also maybe, with one click, to put the details into a comment in a merge request or something like this. So we are open to any suggestions here, and we can improve the integration. This kind of thing looks like a knowledge base, and we can aggregate it: sessions are recorded and you can see all the details later. So, it has started, and this session is exactly like in Slack, but in the web UI, and we can check it in the Database Lab section.
B
C
B
Right now the disks are busy and so on, so we can do some explains. We can do, for example, backslash-d to see the list of tables as usual, and we can explain anything as usual. And this is a private session, so you're alone here in this window — nobody else is adding any comments here. The permalink is always there in the response, so you take it and you share it.
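To make the flow above concrete, here is a rough sketch of the kind of commands a Joe session accepts (the table name and query are illustrative, not from the demo; check the Joe documentation for the exact command set):

```shell
# Commands you might type into a Joe session, collected in a variable and
# printed so this sketch is self-contained; nothing is sent to a real bot.
JOE_SESSION='\d
\d issues
explain select * from issues where project_id = 1;'
printf '%s\n' "$JOE_SESSION"
```

In the web UI each response comes back with a permalink, so a plan can be shared without re-running the query.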
B
Sorry for my English — I slept only a few hours; I woke up early today and went to sleep very late yesterday, so I'm messing up words. Okay, that's it for Joe. The history also has plain visualization for ad hoc use: if you only need to visualize something, these buttons will be here as well. This is not recorded yet — we're thinking about making it recorded — and there's no history yet, but we will add it soon. And no search yet.
B
We will have search here, because it's important — for example, to find similar queries in the past and see how they were handled, what optimization ideas were tried. Right now there's just simple navigation. And finally, probably the main feature for you: being able to create a clone, with full access.
B
I'm not sure if all of you have full access to the production database, but if you do have full access to the production database, you can have full access here: you can create a clone and have a separate Postgres with an almost 7-terabyte database and do whatever you want with it. It's very easy. There are three options. The first option is: we can just create it here.
B
So, almost 7 terabytes, expected cloning time 20-24 seconds. You can check this checkbox to change the default behavior: if you don't do any activity on the clone, it will be automatically deleted after two hours of inactivity, but if you check this checkbox, it will live longer.
B
The only concern here is if you leave it for many days. One day is okay, two days is okay, because we have three days of snapshots, but if you use it for longer than roughly three days, it may lead to an out-of-disk-space problem. If you need it longer, tell me — we will add disk space and it's okay as well. So just keep in mind that this thing should still be considered not permanent.
B
A
B
We should go to a different server... actually, it's still the same. So right now disk space is 6.6 terabytes, and the database has already exceeded that, but we have good compression there — minus 30% of actual size — so we have almost a terabyte of free disk space. It's quite easy to increase, actually, so if we need more, we can add more. You can always see it here.
B
To see what's happening with free disk space, just go to instances, go inside this instance, and see this data. Also, since it's quite new and the upgrade to Postgres 11 happened quite recently, there are no snapshots here — just one single snapshot. This will be fixed, I think, tomorrow or today, and then we will have a long list of snapshots. You will be able to choose different states of the database — for example, if you suspect some degradation of some particular query.
B
You can provision two clones within a minute and compare the explain plans for both. For example, if you know a release happened at 1:00 p.m., provision one clone for 12:00 p.m. and one for 2:00 p.m. and compare them. This is a good feature, not available in Slack, for controlling performance regressions for particular queries. And actually, as an idea, we could create a set of our most important queries — say, the most frequent ones — and execute that set to check for regressions.
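The two-clone comparison could be scripted roughly like this (the ports, snapshot times, and query are assumptions; real ports come from each clone's details page):

```shell
# Build the commands for comparing one query's plan on two clones:
# one provisioned from the 12:00 p.m. snapshot, one from the 2:00 p.m. snapshot.
QUERY='explain (analyze, buffers) select count(*) from events where project_id = 1;'
PORT_BEFORE=6001   # clone from the pre-release snapshot (hypothetical port)
PORT_AFTER=6002    # clone from the post-release snapshot (hypothetical port)

CMD_BEFORE="psql -h localhost -p $PORT_BEFORE -c \"$QUERY\" > plan_before.txt"
CMD_AFTER="psql -h localhost -p $PORT_AFTER -c \"$QUERY\" > plan_after.txt"

# Printed rather than executed, so the sketch stands alone:
echo "$CMD_BEFORE"
echo "$CMD_AFTER"
echo "diff plan_before.txt plan_after.txt"
```

A diff of the two plans makes a regression introduced by the release easy to spot.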
B
This requires some coding, but it's quite an easy option. Okay, I'm not going to check this checkbox; let's create a clone and wait 30-40 seconds. While it's being created, I will show you how you can configure the CLI. How to configure it will be included in the onboarding issue, so you can have this CLI and work with clones and snapshots — for example, see the list of available clones.
B
B
14... 40 seconds. Meanwhile, I'm going to use this psql connection string to connect. But before that: you cannot connect to the database directly, because it's not available from outside, so you need to use an SSH tunnel. I will also include this in the instructions. The main thing here is that you need to pay attention to the port — let's use this port. By the way, this ID was automatically generated because I didn't provide one.
B
If you provide an ID, it will be used. Anyway, the port will be six-thousand-something — it's always from 6000 to 6100. So I'm going to create an SSH tunnel, and now I can connect to localhost on this port. You need to provide SSH keys; I will put that there, and this is how we can connect. But again: this should be done under the same terms as access to production, because it's production data.
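The tunnel-then-connect flow might look like this (the hostname, user, and exact clone port are assumptions; clones listen somewhere in the 6000-6100 range):

```shell
# Compose the two commands: forward the clone's port over SSH, then connect
# with psql through the tunnel. Printed, not executed, to keep the sketch safe.
CLONE_PORT=6001                      # taken from the clone's details
DBLAB_HOST="user@dblab.example.com"  # hypothetical Database Lab host

TUNNEL="ssh -N -L $CLONE_PORT:localhost:$CLONE_PORT $DBLAB_HOST"
CONNECT="psql \"host=localhost port=$CLONE_PORT dbname=gitlabhq_production\""

echo "$TUNNEL"
echo "$CONNECT"
```

Your SSH key must be authorized on the host first, under the same terms as production access.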
B
Size — right, almost 7 terabytes. For example, let's drop a table. I don't know... issues, cascade? Oh, cascade would be too much. Let's drop some smaller table, like abuse_reports: drop table abuse_reports. We do something, we don't see the table anymore, and then we go — this is a different instance — we go here, and we can issue the reset command in the CLI, of course, or we can use the graphical interface. Okay, this is our instance, our clone. Let's reset! That's it, and I think we can...
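The drop-then-reset loop can be sketched as follows (the clone ID and port are hypothetical, and the `dblab clone reset` spelling should be verified against the CLI docs):

```shell
# One destructive iteration on a thin clone: make a change, observe, reset.
# Commands are collected and printed, not run, so the sketch is self-contained.
CLONE_ID="demo123"   # hypothetical clone ID
STEPS="psql -h localhost -p 6001 -c 'drop table abuse_reports;'
dblab clone reset $CLONE_ID   # ~30 seconds back to the snapshot state
psql -h localhost -p 6001 -c '\\d abuse_reports'   # table is back"
printf '%s\n' "$STEPS"
```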
B
Oh, it's being reset — that also takes about 30 seconds — and then we will be able to connect again. It's still being reset... let's see. Yes, it's resetting. Okay, so after 30 seconds we will connect and see that the table is available again. That's how you can iterate very quickly. You can do anything: for example, implement some partitioning schema, check a migration, check how we partition, reset it, check again, reset it, and so on. You can verify merge requests this way.
B
B
D
I have a couple, but first of all, that's very impressive — really nice, looking forward to using it. I wanted to ask if there was a chance we could make the instance the database runs on basically the same level as the production instance, because it would be useful to get the same timings back, and currently the Database Lab instance, I think, is a much smaller instance. Is that something we could do, do you think?
C
B
D
B
D
B
Yeah, sure. We have a page in the documentation describing the security model. On the same machine where this database exists, there are two components installed: one is the Database Lab component and the other is the Joe bot component. Both are running in Docker containers, and they are open source — you can check the source code and so on. They are exposed only locally, on two ports, and then there is Nginx with certificates, and we have this port open.
B
From outside, we talk to this port, and through it to these two components. Actually, we are working on a more secure model — this model is already quite secure, but we are working on switching from open ports to WebSockets soon. With what's already there, the security team has reviewed this and given approval, so it looks like we are fine; still, we think we will close these ports eventually. Right now these ports are used to issue two types of meta commands.
B
If you check the list — right now I am issuing commands from my local computer, the same as the platform is able to do — you see here that we don't even know the database name. This is also a security consideration: we don't know what kind of databases are inside, because we don't deal with the data at all. So you should remember the database name, gitlabhq_production, yourself. And we can destroy a clone.
B
We can create a clone, but we cannot connect to it from outside and see the data at all. For that you need an SSH key (and/or GPG key) and to connect using a tunnel, or to connect locally — it's up to you — or from some other machine in the GitLab infrastructure. The same with the Joe bot. Right now these ports — actually, that's not accurate, they are actually...
B
There is only one port, 443, HTTPS, protected with certificates, and the same for Joe: the Slack bot and the web UI in the platform talk to this port and issue commands, as in Slack — "start a session", "please analyze this query", "what is the plan for this query" — and that's it. So, only meta commands. We know that it's possible to learn particular values in the database.
B
If you do some tricks — for example, the trick with LIMIT — but first of all, you still need to be authorized for this: you need to connect and provide your security credentials. Next, it will be recorded — we have an audit log, and someone can explore it later. So still, no one except you can do this anyway, and it's not possible to use this to download the whole database or do any mass stealing of data.
B
So this is the only concern — data that could be learned — but it was already there for Slack before we started doing this, and it's the same for the ChatOps tool as well. You can learn a particular value, but I don't know how you would learn a string value: it's very hard to extract it byte by byte, and it would be noticeable. By the way, I'm going to create automatic detection of such suspicious attempts to learn data. So I think right now it's fine.
B
F
B
D
B
D
D
B
D
B
No — in Slack we work separately but in a single channel, so we see each other's actions, yet we still work with independent clones. Any changes we make apply only to our own session. It's a public channel, so everybody sees what others do — for example, if I create a table, other people see that I created it, but they cannot work with it, because everyone has their own fully independent clone. Here it's even more so: we are completely alone — it's not a public channel, it's our own chat window, with independent clones.
B
Again, if you use SSH or GPG keys and connect directly with psql or some other Postgres client, those will also be independent clones — always independent clones. Actually, there is a feature request to be able to work together — two or three people in the same clone — to troubleshoot and do some peer coding sessions. This is not yet implemented, so: always independent clones.
B
C
B
C
B
Right. So, programmatically: in the documentation we have two pages. One is the CLI — maybe you will prefer that, just using the CLI; this is what I already showed — to create a clone and everything, and actions will be done using regular Postgres connections. So you can use Ruby code or psql; you just need to connect. You need to establish an SSH tunnel and then connect to the proper clone. The trick here is that I used the remote port...
B
...and I used the same port as the local port. But you can always use the same local port — say, 777 — so a different clone means a different remote port, but this local port will always be the same, just to avoid editing configuration. You fix it, set up a different tunnel, and then you can continue working with the same configuration. As for the API: if you don't like the CLI, there is an API — you can see the Swagger spec here. The CLI uses the API, of course, and we have a pretty basic REST API.
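Direct API use might look like this (the endpoint paths and header name are assumptions; the Swagger spec mentioned above is the authority):

```shell
# Compose curl calls against the Database Lab REST API. Printed, not executed,
# so the sketch is self-contained; base URL and token are hypothetical.
API="https://dblab.example.com/api"
TOKEN="secret-token"

STATUS_CALL="curl -s -H \"Verification-Token: $TOKEN\" $API/status"
CREATE_CALL="curl -s -X POST -H \"Verification-Token: $TOKEN\" -d '{\"id\":\"demo123\"}' $API/clone"

echo "$STATUS_CALL"
echo "$CREATE_CALL"
```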
B
B
B
So if you want to reproduce steps, you connect with psql, and it's like a normal database. You can even run the whole GitLab application on this database — it will be slow because the network is involved, but you can try running the whole GitLab application against this clone of the production database and test something, or check a migration. But if you want to issue SQL commands and have them recorded in history, that's different — that's not possible right now, because the Joe API is not available yet. I remember this discussion.
B
C
D
D
B
Still, sometimes it feels dangerous. We discussed that already: the bootstrap script is configured as a startup script, and if the machine is rebooted, I'm not a hundred percent sure that gitlab-ctl stop is performed. So there is also a problem there. I always run gitlab-ctl stop.
B
GitLab should be stopped when I deal with the database: sending emails is definitely not okay, and fetching — performing synchronization with GitHub repositories or something like that — is also not a good idea. So this is a bit of a grey area and, of course, special attention should be paid. Let's think about how to protect this if we start using it. psql is definitely the safe way, because it will not send emails or perform synchronization — unlike the whole GitLab application with Redis and Sidekiq running.
E
B
Exactly, and we already have a recipe prepared for collecting useful information about the behavior of a migration. It's a bit out of scope for this call, but I will show you — we have it; I will send you all the details. You will be able to analyze how much WAL data was generated, and it will automatically help you detect dangerous exclusive locks that last more than a few seconds. This is already a step towards having this in CI and automatically checking all migrations.
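One way to watch for long exclusive locks during a migration test on a clone is a query along these lines (a generic sketch, not the specific recipe mentioned in the call; the 3-second threshold is arbitrary):

```shell
# A pg_locks/pg_stat_activity query to surface exclusive locks held for more
# than a few seconds. Stored in a variable and printed so the sketch is
# self-contained; in practice you would run it via psql against the clone.
LOCK_QUERY="
select a.pid, l.mode, now() - a.xact_start as xact_age, a.query
from pg_locks l
join pg_stat_activity a using (pid)
where l.mode like '%ExclusiveLock'
  and now() - a.xact_start > interval '3 seconds'
order by xact_age desc;
"
printf '%s\n' "$LOCK_QUERY"
```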
B
But right now we can start manually — at least check them manually, with less effort — and we were also discussing providing a better report. We already have some report, but we could have an even better one, analyzing all the queries that happen during a migration and presenting it like a small checkup — a small overview of the workload that was applied during the migration. That would already be very close to the target of having better control over migrations.
B
B
D
B
A
B
B
B
A
...implemented rigid security — that's it, okay! So, while we're going through this evaluation: as I mentioned earlier, we have an agreement with Postgres.ai for a six-month license for the folks that are in this group — the Geo and database members; there are 12 of us. As we go through this evaluation — and I don't really have a specific way we want to evaluate — we need to determine the return on investment: does it make sense for us to continue beyond the six months?
A
Individual licenses aren't super expensive, so I can imagine we can find ways that this saves us more time than it actually costs us. So how do we want to record the pros and cons of this tool? Do we want to do it in the individual issue that I've already created — it's linked at the top — and kind of do it the way we do database maintainership? You know: this is what I used Postgres.ai for, this is what worked, this is how much time it saved.
A
B
A
B
B
E
B
D
I have a question regarding the evaluation: do we compare this to what we can do today, including Database Lab? Because I think this is sort of an evolution of Database Lab, and in the end we will be using Postgres.ai in the long run. So should we compare it to Database Lab, or to what we can do without Database Lab snapshots?
B
Good question. So, yeah, you already tried it once. I don't have a big preference — I'm interested in both approaches. Of course, I'm very interested to hear that this methodology in general works for you, better than the absence of it; and if you can also compare what you used before to this — with the graphical interface, the history, Joe, and so on — that's interesting too. But for the decision, I think Craig should comment about decision making; it's not on my side here.
B
Right — and please pay attention to the history, because I feel it has very great potential, this history of Joe sessions: we can accumulate a knowledge base about SQL optimization. We can see a lot of stuff from the past, so once search is available — soon — it will be a very powerful tool to analyze what we had. By the way, sometimes we have more than a hundred sessions per day.
B
So definitely search is needed, but if you have a case where you want to check what happened with similar queries in the past, it should be useful; and the visualization is already there and quite convenient. So consider both, I would say: not only Database Lab, but also Joe — and also check out checkups, because, by the way, I can add all the older reports with all the JSON files. I have them accumulated, and it may be helpful for analyzing trends of growth of everything.