From YouTube: OpenShift Administrator’s Office Hour (Ep 6)
Description
Join Andrew Sullivan, Chris Short, and the occasional special guest for an hour designed specifically to help the OpenShift admins out there. Come with your questions, leave with solutions.
B
Yeah, so hello, everyone. Andrew Sullivan, technical marketing manager with Red Hat cloud platforms. Chris and I are teammates, as we have been for a couple of years now. It's been a while, hasn't it?
A
Months, yeah, it's been over a year. But we're happy to have you.

B
Oh, thank you.
B
So this is the Administrator's Office Hour, which means that this is an ask-me-anything style time period where you can, well, ask us anything, and I mean that quite literally. Well, hopefully you'll want to stay on topic around OpenShift.

A
Don't ask us the answer for, you know, success in human life or whatever; we've got 42 as the general answer for that. But otherwise, yeah, literally ask us anything Red Hat tech related. We've got you covered, or we'll find you an answer eventually. Yeah.
B
Yeah, so, as Chris just pointed out, whether that's one of us answering your question directly, or whether it's us doing some research and finding the right people inside of Red Hat to get those answers, that's what we're here for. That being said, in the absence of your questions (and again, please feel free to ask us questions through any of the various chat or social media mechanisms; we'll try to stay on top of those)...
B
In the absence of that, we do have a couple of topics, a couple of things to talk about today. First, I do want to highlight, as we kind of started out at the top of the hour, or the top of the show: I was poking fun at myself and my internet. My internet is having some difficulties today.
B
Upload speed is phenomenal; I love having bidirectional gigabit, but download speed has been suffering, so hopefully everything will go fine. I was doing a presentation with a customer yesterday and it just completely cut out, and fortunately Christian happened to be there and he just picked it up and rolled with it.
A
Worked out well. Believe it or not, JP Date, Andrew does not have Comcast. I have Comcast, which is why the stream went out in the middle of a stream one day: because they upgraded me.
B
Yeah, so Ting is my internet service provider, and they have actually been phenomenal for the four-plus years I've had them now. This is literally, I think, the second time we've had an issue. Coincidentally they're both within two months of each other, but the first time it was completely not their fault: basically, somebody got overzealous with a backhoe and cut a bunch of fibers.
A
I cannot tell you how many Air Force bases I have seen disappear off the internet because of that.
B
Yeah, or, I was on a base one time and a tank ran over the wrong part of the road and crushed the pipe underneath. You know, whoops. Yeah.
B
Yeah, and this time it is still not Ting's fault, at least from what they've told us; it's basically their upstream provider. So it's not just my area, or my ISP; it is a broad swath of this area.
B
Yeah, was it T-Mobile? T-Mobile had an outage.
A
Nah, no, I got out of that game when I realized that I could easily trample over anybody's BGP routes and send all their traffic elsewhere.
B
That's a good question.
B
So, in total, there's something like six and a half gigabytes if you were to pull them down and mirror them. But my guess would be that using the offline option is going to force, is going to ensure, that it does not try to connect to that outside registry; it's going to make sure that it only tries to go to your inside registry.
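For context, in a disconnected OpenShift setup like the one being described, pointing the cluster at the inside registry is typically done with an ImageContentSourcePolicy that redirects pulls from the public source to the mirror. A minimal sketch follows; the mirror hostname and repository paths here are hypothetical, so substitute your own:

```yaml
# Hypothetical ImageContentSourcePolicy: redirect release-image pulls
# from the public registry to an internal mirror.
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: release-mirror
spec:
  repositoryDigestMirrors:
  - source: quay.io/openshift-release-dev/ocp-release
    mirrors:
    - mirror.example.com/ocp4/openshift-release-dev   # assumed mirror path
```

The exact source/mirror pairs are emitted by the mirroring tooling during a disconnected install, so prefer the generated values over hand-written ones.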
A
Yes, okay. So there's more questions pouring in now. I'm going to let you tag this one while I go get the link to answer the other one. "Today, CoreOS is not truly immutable, i.e. MCO can make changes and they will persist across restarts."
B
Yeah, so, yes. And I'll just digress slightly to answer that other question: yes, it was released; 4.6 was released yesterday, and the most succinct place for the link will be the release notes in the documentation. I already know what it says, but there's also the What's New presentation.
B
So yes, CoreOS is quote-unquote immutable. Let's talk about what that actually means. It doesn't mean that the operating system can't be modified; it just means that it is always in a known state, in particular with rpm-ostree and the way that CoreOS works. This means that the Machine Config Operator, and now we have, or will soon have, the File Integrity Operator, right?
B
We can verify, and we know, the state that that CoreOS operating system is supposed to be in, but we can go in and we can lay down files, right? If you were to look at... do I have a cluster up and running? I don't think I have a cluster running at the moment, which is strange. Oh, I know.
B
I install CoreOS, it's on a particular rpm-ostree version, and then MCO lays down all of those files, and that is persistent, quote-unquote, until the next update, at which point it will flip over to the new rpm-ostree and re-lay all of the information that's needed inside of there. At least, that's my understanding.
B
I would like to verify that, and I probably should verify that through a couple of different testing methods.
B
So yeah, I was sitting here going, why don't I have a cluster? It's because I flipped over my lab to being vSphere today and I don't have one yet.
A
I busted my cluster the other day and I haven't stood it back up yet, but luckily I have one in our handy-dandy partner development system, right? That's our RHPDS.
A
Just find your name in all the sea of Slack. There we go, here we go.
B
Yeah, so I see there's also a question from Taskeen: "I'm completely new to OpenShift." Welcome to the party; glad you could join us. "We'd like to learn about managed OpenShift on IBM Cloud. Could you please give some insights?" I'm actually grabbing a link.
A
Like I said, there's a landing page; please use it as a kick-the-tires, light-fires kind of thing. I feel like that's going to help you the most for getting started, but there's a lot of ways to do IBM OpenShift right now.
B
Yeah, and I'll take a slightly different approach. Any of the managed services, whether it's OpenShift on IBM Cloud, whether it's OpenShift on Azure, OpenShift on AWS, any of the managed offerings: effectively, what you're doing is having somebody else do the operations, the administrator side of things, so you can focus on the developer side of things.
B
That will walk you through all of those processes, all of those features, functions, et cetera, so you can really focus on things like Pipelines or Service Mesh or whatever that happens to be, and not have to worry about, "Well, today I need to, you know, click the upgrade button, and how's that going to affect things?" Instead, the managed service folks take care of all of that for you.
B
I don't know the specifics of what any of the managed services actually look like underneath, you know, what type of compute instances or storage instances or whatever that happens to be. We could ask; actually, I would have to find out who the new product manager is, because the old product manager just moved over to our field organization. That was Patrick; Patrick and I have known each other for a long time, since we...
A
Anyways, for general onboarding I would just head to learn.openshift.com, or red.ht stream learn. There's also the What's New deck. That's right, thank you, Zinnie! That's out there on our Speaker Deck page. I'm having a problem with words today, can you tell? Too much coffee or not enough coffee? Not enough coffee, and the fact that I had to wake up and take pain meds; I'm just struggling through life right now.
A
Anyways: "How did you two get into Kubernetes and OpenShift? What did your learning paths look like?" I think that's kind of a wide-ranging question, and I'd like to hear your answer first.
B
Yeah, one slight digression: cgroup ninja, hopefully we sort of addressed your question; I feel like it wasn't necessarily a complete or thorough explanation. If you could provide clarification on what you mean by making CoreOS truly immutable, like, what does that look like to you? Maybe we can dig in more; I am curious. So, how did I get into Kubernetes? Way back when I was working at a previous employer, who's a storage company...
B
I did a number of projects, and basically saw that the VMware side of things was stagnating. They were continuing on their path, but I was on a team of 9 or 10, all VMware TMEs, and I wanted to branch out. And my manager said, "Hey, thanks for finishing up this project" (it was a best practices guide that I had rewritten and updated), "so what do you want to do for the next few months?" Right?
B
I had to compile SkyDNS, right? It wasn't even CoreDNS at the time, and integrate all of those pieces. That's how I got started, and then I found there were a couple of other people in the company that I was friends with; the three of us had independently started looking into containers, and we just kind of went from there. We started the whole containers thing over there, and it continues to this day.
B
I got involved in OpenShift because, well, because I was part of the partner team at the time, and of course Red Hat and OpenShift and all of that meant that I needed to learn how OpenShift works. So it was very much a gradual thing. I have a very heavy operations background.
B
Like, I once played a developer on TV. I was terrible at it, so I moved over to being on the operations side.
A
That's awesome. So my journey was a little different. I came at it from the community angle, and I saw Kubernetes kind of on the horizon way back in 2015-16-ish, and I was like, yeah, I really need to learn this Kubernetes thing. And I sat down one day, I forget when or what or whatever; it was pre-Max, so it was probably 2015, and BC, before kids.
A
My older one wasn't with me at the time; she was with her mom in Ohio. It's like the way I learned Ansible: I went into my office, four hours later I came out, and I was like, "I'm an Ansible genius," right? Like the way I learned Kubernetes: I went into my office, I stood up a cluster on Raspberry Pis, and eight hours later I came out and I'm like, "This is going to change the world." And this was, like, you know, 2015. So I was very right.
A
It was going to change the world and how we do things. But I already had experience with OpenShift, right? Like, I used the gears and cartridges at a previous job.
A
So I was like, well, if Red Hat is seeing Kubernetes as the way of the future, I should probably at least understand it. And so I got involved in the project, specifically within the Kubernetes SIG ContribEx, contributor experience. I started off in docs, did a little bit of help there, and I realized that working on the contributor experience stuff would be better, like bringing more people in. I'm very good at bringing new people onto tech, and, you know, my skills were better suited for that.
A
So I got involved that way, and it's kind of just been "learn more, learn more, learn more" ever since, and here I am working for Red Hat on the OpenShift team. So yeah, I kind of went at it very differently than Andrew did, but still got to the same kind of spot. And yeah, it's good to have those two stories, I feel like.
A
So I came at it from the community angle, and then started working for a vendor that had a Kubernetes project in OpenShift, and, you know, came to Red Hat as an Ansible person, and then worked on a bunch of Ansible-Kubernetes type stuff, Ansible Operators specifically, and then came over to the OpenShift side to help Andrew and company, you know, help folks learn OpenShift, essentially.
B
Yeah, I will say, to answer the second part of your question there about learning paths: for me it's been both a little bit the same and a little bit different. I learn by doing. I was a customer; I was a storage and virtualization architect, right? I started as a junior administrator, became senior, became architect, worked on some very large projects in various organizations inside of the government.
B
Some of the people that I worked with on those now work at Red Hat, funny enough. Shocking. So when I joined my previous employer to be a tech marketing engineer, I was already intimately familiar with every aspect of that technology. Being at a vendor and learning a new technology is a completely different experience.
B
Yes, and you've probably heard us mention this before: I don't have clusters that run for a very long time. On a normal day, I'll stand up an OpenShift cluster that'll run for a couple of weeks, usually, and then it gets torn down and rebuilt, because, you know, we're constantly testing, we're constantly checking things: how does this work?
B
What does this look like? But I don't have an application per se that runs inside of there, so what that means is we have to be very disciplined about understanding how things work and putting them in context. But also, I've taken the approach of, well, I want to have that experience, I need to do that. So for my personal stuff in my house, right, I do keep a Kubernetes cluster that I just use, to see what it's like.
B
It's a learning experience as a result of that. But yeah, going back to the direct question, "What's your learning path?": for me, it's almost entirely hands-on, and I rely on my team very heavily. Chris, you know this; you see our internal chats all the time. Oh yeah, constantly asking questions and, you know, challenging my own assumptions. Eric is phenomenal at calling out, "You know, that was an assumption, and you're wrong."
B
So, from Testing: "Hearing you for the first time. Is there any community which I can join?"
A
Yeah, so we have a Discord. I dropped a link to that in the chat there, so feel free. Actually, where did he come from? Came from YouTube? So let me drop it there.
A
Yes, OpenShift Commons is a great place to get started, along with OKD, which is the open source version of what we do here with OpenShift. OKD actually doesn't stand for anything anymore, is my understanding.
A
It's just a name; like all good names, it kind of means nothing. But yes, you can join Commons and use the Commons working groups to kind of get your sea legs under you with OpenShift, and then you can kind of dive into the rest.
B
Yeah, I wish I had my cluster up, because that was, that's...
B
On the agenda to do this week is upgrade my own cluster.
B
Yeah, you know, in practice... it should be. And did I get that email from you yet? Yeah, I did. In practice, it should be as simple as selecting the right stream, you know: updating the channel, selecting the right stream, and then going from there.
B
So let's rewind just a little bit here; I'm looking back at the chat. So yes, OpenShift Commons, definitely a community there, as well as our own openshift.tv Discord channel and community. I would also suggest, if you're a part of the CNCF Slack channel, or Slack, I don't know, whatever...
A
All right, folks, and we're back. Sorry about that; OBS froze on the streaming rig, my bad. It apparently needed to restart more than my laptop did.
B
Well, yes, but not yet, not yet. So I started to discuss this and then we lost the stream. If you go to the... and I think I'm still sharing my screen (you are), so if you go to the documentation section, and I think it's under the installation and update section, we have, let's see here, I think it's under here, installation process, come on...
B
Update service... no, it's not under there. So what I'll do is...
B
If you have a new cluster, you can absolutely go today and pull down those 4.6 bits; you can deploy a cluster and it'll be great. For updates, though, it's a little bit different, in that it'll be available in the fast channel. So first you need to switch your OpenShift 4.5 cluster to a fast channel, update to the latest version in there, and then switch to the OpenShift 4.6 fast channel, and then you can update from there. For stable...
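The channel switch being described is a one-field change on the ClusterVersion resource (named `version` by default). A minimal sketch of the merge-patch body, assuming the fast-4.6 channel name from the discussion above:

```yaml
# Merge patch for the ClusterVersion object named "version":
# after updating to the latest 4.5.z on fast-4.5, flipping the
# channel to fast-4.6 makes the 4.6 releases visible to the cluster.
spec:
  channel: fast-4.6
```

Applied with something along the lines of `oc patch clusterversion version --type merge -p "$(cat patch.yaml)"`, or equivalently by editing the channel in the web console's Cluster Settings page, as shown on stream.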
B
It will probably be two to four weeks before it's available in the stable channel. The reason being, we want to basically let it bake. It sounds bad to say that we want it to get tested by more broad use cases, but that's kind of what happens.
B
You know, let the fast people do a little bit of testing and validation, and that way the stable people capitalize on all of that. And if they find bugs, if they find issues, essentially they will hold that release out of the stable channel until, you know, 4.6.2 or .3 or whatever it takes in order to get it ready for, you know, broad updates or upgrades using the stable channel.
A
Yeah, I mean, and let's see, Imperam points out: "These two days they've been releasing different versions; at least it has confused me. Until an hour ago I couldn't change from candidate-4.6 to stable-4.6; now I see it as I want it." Yeah, so, like, changes are happening right now in the background, folks. A release takes... we intentionally, unintentionally, however you want to phrase it, kind of slow-roll the stable part, because we understand that if you're doing things in the stable channel, you intend for things not to break.
B
Okay, so let's jump back up a bit, and I didn't forget about cgroup ninja: "Similar to strings in Java, which are truly immutable, and not not-knowing what the state would be at any time; being non-modifiable, not allowing writes, even temporarily." Okay, so in the OpenShift world, that would mean any patch, update, et cetera would need to be applied on a new CoreOS instance, and you'd do the usual cordon-and-drain dance from the old instance to the new instance.
B
With rpm-ostree and how we flip between the images, I would like to do some more investigation there and find out precisely what is or isn't modified; what gets completely, for lack of a better term, blown away, whatever gets completely removed and then recreated by the Machine Config Operator, versus anything that remains there. So that's one where I need to do a little bit more investigation.
A
And I feel like, yes, you can get that complete, total immutable experience so long as you don't reconfigure anything with MCO, and I don't feel like that is the best use case, right? Like, if you're really, really looking for an immutable OS, like 100 percent locked down in that manner, there's other OSes out there, but I'm not sure if they can run OpenShift.
A
Right, so yeah, you can't really, at least in a supported fashion, lock the OS down and then say, "Yeah, I'm not making any machine config changes," because there are so many customers that we have with all these different kinds of hardware that they need to add on to their cluster to make it available, and not having the MCO there would make that very hard. So yeah, it's gonna be awesome if you just...
A
I'm trying to understand the need for that high level of immutability. Like, if you can't say what you're doing, if you're doing some secret-squirrel stuff, fine, we understand that. But if you need a 100 percent immutable OS in the sense that you're stating, right, like every single change has to have a new box, that's possible, but that's also very time-consuming. So I'm trying to understand the why, you know.
B
Part of the reason I held off on this a little bit was because we also have, and if you can see in my browser here, I have this Microsoft Word tab: this is a little thing that we made available called a hardening guide. This is the draft CIS benchmark.
B
Yeah, and in nice big letters here: it is a draft. This is something that we can share with customers, so if it's something that you're interested in, please feel free to reach out. Definitely, I see that. So this is 217 pages of how to harden your OpenShift cluster, and it goes into, as you would expect with any of these CIS benchmarks, excruciating detail in how to check, how to look at things, how to remedy, et cetera.
B
Yes, and I will also point out, and I'm going to browse to the repo here, and I'm doing this blind, I haven't actually looked at it before: there is a pull request, you can see here, for the compliance-as-code, so for the automated remedy... automated, that's not the word I'm looking for... the automated remediation tools. There you go, great.
B
...to introduce all of this into there, so that way it's a much easier, much faster, much more convenient way of doing that. Additionally, there is the Compliance Operator. The Compliance Operator, which is new to OpenShift 4.6, will go through and check a lot of these things as well. I don't think the CIS content is in there, because this is a draft; we haven't released the CIS benchmark for the Compliance Operator yet, but that is on the roadmap. That's going to be happening, so effectively, use the Compliance Operator.
B
It's one of the many ways, one of the other things that we can do to help with the security aspect inside of an OpenShift cluster.
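For readers who want to try the Compliance Operator mentioned above, operators like this are normally installed through OLM with a Subscription. A rough sketch follows; the channel name and install namespace here are assumptions for illustration, so check the 4.6 documentation for the exact values:

```yaml
# Hypothetical OLM Subscription installing the Compliance Operator.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: compliance-operator
  namespace: openshift-compliance      # assumed install namespace
spec:
  channel: release-0.1                 # assumed channel name; verify in OperatorHub
  name: compliance-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

In practice the same install can be done by clicking through OperatorHub in the web console, which generates an equivalent Subscription.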
A
Yeah, rare, right? Yeah. "Have you looked at the RHEL CIS?" Yes, I have, and it's very, very big, and there's Ansible Lockdown as a thing to help you with STIGing and CIS-ing your RHEL boxes.
B
There we go; you know, literally reading what's in front of me here.
B
This is a 4.5.7 cluster. Let's look at what we've got here. So this one, Chris, I assume you provisioned from our RHPDS, which means that it's an automated deployment happening in the background, and it is, you can see here, running on top of AWS. RHPDS, if you're not familiar with it, or if you've never heard of it before, is the Red Hat Product Demo System. It's a thing we make available to employees and partners where you can go in...
B
You can request access to resources, for example an OpenShift cluster, and then be able to do things like demos or even learning, right? A lot of times we use these when we do workshops: we deploy a number of clusters, hand over a workshop guide, and let, you know, whoever happens to be in attendance run through and learn all of that stuff.
B
So if I switch this back over to stable, I think .14... .15 is the most recent in the stable. They must have just changed that. So you see, with fast I have access to more. More, exactly. So let's go ahead and hit update on that. Essentially, the way that all of this works is that there are specific releases that you can upgrade from and to. So what does that look like? And I'm trying to remember, do you remember the Cincinnati webpage?
B
So if I go to github.com/openshift and I look for Cincinnati... nope. You'll see that there's a couple of different repos that show up here. Cincinnati is the service in the background that maps all of these things together.
B
So that's what we're doing here: I went from, or it's in the process of going from, 4.5.7 to .16, and then from there I can make the jump. Over time, I would expect that this list will begin to grow, so it may be that they do some additional testing and validation, you know, QE does their thing, and maybe we add 4.5.15 or something like that.
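As a rough illustration of how those upgrade edges are read: Cincinnati-style graphs are a list of version nodes plus edges given as `[from_index, to_index]` pairs, and a version's valid update targets are the nodes its outgoing edges point to. A small, self-contained sketch in Python follows; the sample versions and edges below are made up for illustration, not pulled from the real update service:

```python
# Sketch of interpreting a Cincinnati-style update graph.
# "nodes" lists releases; "edges" are [from_index, to_index] pairs
# marking supported one-hop updates. Sample data is hypothetical.

SAMPLE_GRAPH = {
    "nodes": [
        {"version": "4.5.7"},
        {"version": "4.5.16"},
        {"version": "4.6.1"},
    ],
    "edges": [[0, 1], [1, 2]],  # 4.5.7 -> 4.5.16 -> 4.6.1
}

def upgrade_targets(graph, version):
    """Return the versions reachable in one update hop from `version`."""
    index = {node["version"]: i for i, node in enumerate(graph["nodes"])}
    if version not in index:
        return []
    src = index[version]
    return [graph["nodes"][dst]["version"]
            for s, dst in graph["edges"] if s == src]

print(upgrade_targets(SAMPLE_GRAPH, "4.5.7"))   # ['4.5.16']
print(upgrade_targets(SAMPLE_GRAPH, "4.5.16"))  # ['4.6.1']
```

This is exactly the "what can I update to" question the console answers on your behalf: no edge, no offered update.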
A
And I can drop those links in there, or if you could just drop the link in chat to that, yep. Thank you. JPDate asked how much storage support is sitting on. I have no idea, to be honest with you; it's probably in the petabytes. I can only guess; I don't even know who I would ping to be like, "Hey, how much?"
A
I'm sure there's some S3 object store stuff, yeah. Like, JPDate loads a ton of stuff into it, so I'm sure other customers are in the same boat. So yeah, it's not like we have, you know, a basement full of EMC or NetApp clusters in the tower or something, right? It's definitely block storage someplace.
B
And what's interesting: so, you know, Red Hat of course has an IT organization, and they use that for supporting internal services. For example, our home directories and stuff like that; for those of us that expose the tilde-home web thing, that's actually hosted on a NetApp.
B
If I click on fast 4.6.1, I can use 4.5.16. If I look at, for example, stable 4.5, because that's where I'll have actual information, you can see I can update to any of the 4.5 branches from these 4.4 branches, right? So that's how you can interpret this information: I need to be on at least 4.4-dot-what, .10 is the lowest one I see here, to be able to update to 4.5.1 through 4.5.15.
B
That's the way that we interpret this information. So, useful information, valuable information, particularly here in the very early days of a new release: what do I need to be on to be able to update to the latest? And as you see, the only option is fast 4.6, excuse me, the fast channel for releases, and updating to 4.5.16 first.
B
So I know we've got, what, about 15 minutes left, 17 minutes left. The other thing that I wanted to talk about... and I apologize, I have not been keeping an eye on chat very well.
B
"So there's a delay on the stable channel?" Yes, Yawn, that is correct. I thought it was two weeks... it could be as short as two weeks. I think it was 4.4 where it ended up being like six weeks or something like that; it was a pretty long time with, I think it was 4.4, to do the updates for stable.
B
It's the GitHub Cincinnati one, or that one? Yeah, either or. Okay, I'll drop them both.
B
So the other thing that I wanted to talk about today, something that's very, very, very easy to miss because it hasn't been put into the right part of the documentation yet: OpenShift 4.6 has a new way of doing static IP configuration with VMware. It's the reason why my lab has been flipped over to VMware today, so we can dig into that.
B
So previously, if you wanted to do a static IP with OpenShift on vSphere, so essentially UPI, you would have to boot and install using the ISO installation method; the OVA only worked with DHCP. Now we can deploy using the OVA and then add in this information, which, if you see here, is the exact same information we would append onto the kernel line.
B
So now we can add it in as a VM property, which is particularly useful if you're doing automated deployments of this type of stuff. You can see here they showed using govc; if you've created automation that uses Terraform or Ansible or whatever you want to use, it's as simple as setting this particular option. Excuse me.
B
Where is it... CoreOS VMware OVA, here. So we'll go ahead and do next, sure, we'll call it "coreos". I'm not actually going to deploy a cluster here; I'm just going to deploy a single node and then configure that static IP, and we should see it come up and use that static IP when it's deployed. I've only got one host option here, so not much of a choice.
A
OBS rig dropped; it's working everywhere but YouTube. Fun.
B
Yes, that's the network I want to connect it to. So I'm not going to provide ignition config data. Normally, when you're deploying a node, if I were deploying a cluster, I would absolutely provide ignition config data here, right? I would set the encoding to base64 and I would have a big, long base64 string of that ignition config.
B
All of that... I just want to show this static IP configuration process here, so that's why I'm skipping over actually deploying and configuring a cluster at the moment. We'll hit finish there and we'll let it do its thing. What we should see here is, it's down in our tasks; hopefully this will go through.
B
JP Dave: messing with DHCP on Windows Server. Yeah, DHCP is kind of a pain. I do like the Active Directory interface for DNS, though; I find it relatively easy, especially if you're using, like, PowerShell to do that. Yeah, it's not the worst thing in the world, I'll put it that way.
B
Yeah, you were saying before we started that the assisted installer wasn't working for you.
A
And I mean, I feel like that goes without saying. I think the polite way Eric phrased it once is that I'm a power user, and then I'm trying to use, like, consumer equipment to do power-user stuff, and it's just not...
A
Well, I think what a lot of folks don't realize is that, like, I'm the IT team for my entire family, so I try to keep things consistent and simple, right? So, like, I can administer everyone's network, my entire family's, from right here, and that's great. But you lose a lot of the power features that you would get with something like a Ubiquiti network or something like that. But...
A
The
google
networking
stuff
works
great
until
it
doesn't
I'll
just
leave
it
at
that,
but
it
used
to
be
that
host
could
self-define
their
own
dhcp
dns
host
name
and
now
that
functionality
is
gone,
and
I
don't
know
why
or
if
there's
some
release
notes.
That
will
tell
me
because
well,
google.
B
My boss, and by my boss I mean my wife, would not allow me to have that much noise in the house, which I'm comfortable with; I don't want that much noise in the house either.
B
After sitting through this five-minute-long process just to deploy an OVA, I'm thinking I need to upgrade. Everything's running on one gigabit at the moment, so maybe I should update to 10 gigabit, and it seems to be getting to that time.
A
Well, yeah, you realize that very quickly. If you have gigabit network everywhere, any one device can saturate your 120-meg uplink. With a gigabit uplink, though, you probably have a little more leeway, but this seems like it's internal and it's kicking your ass.
B
Yeah, it is, it's all internal. And it's funny, because it's going from my workstation to the ESXi server to another server that's hosting the storage for it, so it takes it a minute; it's a little slow. There we go, all right. So now we have our OVA... excuse me, we have our virtual machine inside of here.
B
So if we follow the documentation, what I need to do is set this particular value, or this particular virtual machine configuration, to this string, and I'm literally going to copy this string. The 100 network is not one that exists in my house, which means that if it comes up successfully, I should be able to tell.
B
It should show this IP address, and we know that it is getting that from here. So I'm going to copy this first, and then we come over here, we go to our virtual machine, we edit settings, go to VM Options, and, oops, I want the Advanced section down here, Edit Configuration, and then I want to set a configuration parameter.
B
Like I said, I haven't done this before; this is a learning opportunity for both me and you.
B
Or maybe it just wants me to actually install a cluster; that's possible too, because it would be looking for an ignition config to then pull its configuration from, and maybe that's when it actually looks to do that configuration. Not entirely sure, so we'll have to wait and see.
B
Yeah, so with only three minutes remaining, and my apologies for this little demo failing here: with three minutes remaining, I do want to say thank you to our audience. We really do appreciate you tuning in and staying with us, and we appreciate the questions very, very much. If you would like to ask any questions that you didn't get a chance to today, or if you don't feel comfortable sharing them in the public chat and having us discuss them, please feel free to reach out. You can reach out on social media.
B
I think, to use a phrase of the kids these days, my DMs are open. Or what is it, slide into the DMs? That's what it is, something like that. I don't know, I'm too old for that.
B
You're
you're
welcome
to
reach
out
to
me
at
practical,
andrew
all
one
word
on
twitter.
You
can
also
reach
out
to
me,
via
email,
firstname.lastname,
redhat.com,
happy
to
help
happy
to
answer
those
questions.
However,
we
can,
and
if
you
have
anything
that
you
are
interested
in,
if
there's
any
subjects
that
you'd
like
us
to
cover
on
the
show,
I
would
be
very
very
welcome
for
that
as
well,
very
open
to
that
as
well.
A
Right, yeah, sorry, the fourth; we will see you then. So happy Kubernetes and OpenShifting; until then, see you soon.
F
Well, hello, everybody, and welcome to another OpenShift Commons briefing. We're really pleased that you've joined us today, and I want to welcome the folks from Crunchy Data. They're going to regale us with some wonderful cloud native GIS applications using Crunchy Data on OpenShift, and we have with us today Steve Pousty, Paul Ramsey, and Adam Tim. Steve is one of my favorite people, a long-time Red Hatter and fellow evangelist back in the day, so we're really thrilled to have him back here, and we've always very much loved our wonderful partners, Crunchy Data.
F
So I think this is going to be a lot of fun; some really great demos are in the offing. So Steve, why don't you introduce your team and take it away, and we'll have live Q&A at the end. Thanks again for joining us, everyone.
G
Thanks, Diane. It's really great to be able to hang out with you again; I've missed it. So I'm Steve Pousty, I'm the lead of developer relations at Crunchy. Paul Ramsey is, is it executive engineer? Senior engineer? Something really high up. For the purposes of this talk, the important part about Paul is that he is one of the founders of PostGIS, he's the lead committer, and he and Martin Davis are the two that built Crunchy Data's Crunchy Spatial platform, which we'll be demoing today. Adam Tim is a wonder guy who goes between sales and consulting and grant writing and PM'ing and doing all sorts of stuff; he's formerly of the NGA.
G
I
don't
know
if
he
was
an
analyst
or
not,
but
he
was
at
nga,
so
he
cares
very
deeply
about
spatial
stuff,
but
also
being
at
nga.
He's
also
was
on
the
beginnings
of
cloud
native
stuff
that
the
government
agency
was
doing
so
today
we're
going
to
talk
to
you
about
some
cool
stuff
that
we've
done
from
actually
from
soup
to
nuts,
with
crunchy
data
postgresql,
some
cloud,
spatial
stuff
that
paul
will
introduce
in
a
little
bit
and
we'll
be
running
it
all
on
openshift.
So
let's
go
ahead
and
get
started.
G
Oh
I
got
to
click
on
the
actual
window,
all
right!
So
there's
us!
Oh
sorry,
paul's
the
one
in
the
middle
adam's,
the
one
who
looks
like
he
might
be
an
nga
agent
anyway,
on
the
on
the
far
right-
and
that's
me
on
the
far
left.
Actually,
it's
no
hair
framing
the
hair
guy
is
what
we're
actually
aiming
for
with
that
effect.
G
What
we're
going
to
do
we're
going
to
talk
about
some
cool
stuff
with
coop
postcresql
and
spatial
microservices,
and,
of
course,
since
we're
on
the
commons
one,
our
version
of
cube
is
going
to
be
openshift
and
then
we're
going
to
use
these
pills
pieces
to
build
some
cool
stuff
in
real
time.
So
we're
gonna
spin
up
all
well
part
well,
you'll
see,
but
it's
almost
exactly
real
time.
Loading
data
is
slow,
so
we're
not
gonna
show
you
loading
data
other
than
that
everything's
in
real
time.
G
We have some of the leading contributors to PostgreSQL code, like Paul, but also Stephen Frost, Tom Lane, Joe Conway, approximately one-third of the contributors to Postgres. So we can do stuff in the code, but we're also deeply connected to the community. Everything we do is 100% open source and runs on Postgres; no open core, everything we do is fully open source. Our claim, just like Red Hat's, is engineering-quality support.
G
So
that's
one
of
our
big
ones.
We
have
you
get
calls
with
skilled
post
rescue
engineers,
24
7
365,
usually
20,
minutes
or
less,
and
so
one
of
the
examples
would
be
like
if
you
had
something
going
on
with
post.
Yes-
and
you
were
one
of
our
customers-
you
probably
wouldn't
get
paul
on
the
immediate
call,
but
you
get
paul,
probably
on
the
second
call
after
they
figured
out
what
the
problem
was
and
then
one
of
the
other
big
things
for
us
is
commitment
to
security,
conscious
enterprises.
G
You
know
we
have
a
lot
of
security
clearance
and
we've
built.
We
worked
on
the
stig
for
postgres,
so
we've
done
a
lot
with
security
and
we're
cloud
and
platform
agnostic
and
we've
been
working
with
containers
since
day,
one
with
openshift
actually
with
openshift.
We
were
building
containers
back
when
openshift
was
not
running
on
cube,
but
we
quickly
made
the
the
conversion
to
cube
and
open
shift,
and
then
we
have
been
doing
an
operator
for
quite
some
time
and
you'll
see
it
in
action
today.
G
So
here's
just
some
stuff
about
how
we've
worked
with
red
hat
before
ansible
tower
now
uses
postgres
behind
it
and
satellite
also
uses
postgres
behind
it,
and
we
are
one
of
the
preferred
partners
if
you
want
postgres
behind
satellite
or
ansible
tower
right
and
we
anywhere
else
you
want
to
do
if
you're
running,
like
jira
or
if
you're
running
what
was
the
j
frogs.
I
don't
remember
jfrogs,
but
behind
j
frogs
we
can
run
behind
j
frogs
as
well
for
aha.
G
So
here's
what
ashesh
the
gm
cloud
bu,
I
think,
he's
a
senior
vice
president.
Now
for
red
hat
and
cloud
in
general
and
here's
what
I'll?
Let
you
read
it,
I'm
not
going
to
read
it
out
loud
well.
Actually
I
should
in
case
there's
someone
that's
hearing,
that's
vision,
impaired.
We
are
very
excited
to
see
the
great
results
of
the
work
that
crunchy
data
has
put
into
containerizing
and
operationalizing
postgres
in
the
openshift
environment
having
effectively
built
a
database
as
a
service
infrastructure.
G
Crunchy
data
has
made
it
easier
for
application
teams
to
use
the
power
of
postgresql
in
their
modern
architectures.
So
that's
from
and
then
chris
morgan,
the
lead,
tech,
marketer
and
one
of
my
good
friends
on
red
hat
cloud.
This
is
about
our
operator
certification
and
this
is
just
to
attest
how
long
we've
been
doing
the
operator
on
cube
and
with
red
hat
right
so
as
they
and
we're
actually
in
the
catalog
as
well
as
they
have
for
several
years.
G
If
you
know
about
operators
which
you
should
and
since
you're
on
the
comments
call,
we
are
actually
at
phase
five,
so
we're
on
the
autopilot
mode
for
our
operator
and
you'll,
see
a
demo
of
our
operator
adam's
going
to
demo
that
in
a
little
bit
after
paul
does
his
thing.
So
I
tried
to
go
through
that
relatively
quickly,
because
I
know
most
people
don't
want
a
lot
of
marketing.
E
So it's not uncommon in our connected world for you to want to put a map on a computer, but if you're coming to the problem for the first time, it's all too common to be sucked into the tar pit of being told you need a great deal of different, new, extra, complex software to do it. You need a geographic information system.
E
My
friend
you
need
a
gis,
but
actually,
if
you're,
building
an
app,
probably
you're,
building
an
app
using
a
database
underneath
it
you've
got
postgres
in
your
app,
which
means
you
can
easily
have
post
gis
in
your
app,
which
means
you
already
have
gis
effectively.
Post
gis
gives
you
all
the
functionality
of
a
gis
without
having
to
bring
the
gis
in
your
infrastructure.
You
already
have
the
functionality
in
your
database
in
your
spatial
database
postgres
with
postgis.
E
That
means
you
can
answer.
Gise
questions
things
like
what
parcels
within
a
kilometer
of
the
file.
That's
a
gise
question.
It's
a
question!
You
can
visualize
in
a
mappy
sense
what
things
are
a
thousand
meters
of
the
fire,
and
you
can
express
that
the
answer
to
that
question
in
sql.
If
you
have
a
spatial
database
and
not
very
much
sql
either,
this
is
one
sql
statement.
It's
five
lines.
It's
a
few
enough
words.
I
can
count
them
on
two
hands.
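A hedged sketch of what a statement like that could look like; the table and column names (`parcels`, `fires`, `geom`) are assumptions, not the actual schema from the talk:

```sql
-- Parcels within 1 km of a given fire. ST_DWithin can use the
-- spatial index, and the distance is in the units of the layer's
-- SRS (meters, assuming a projected coordinate system).
SELECT p.parcel_id
FROM parcels p
JOIN fires f
  ON ST_DWithin(p.geom, f.geom, 1000)
WHERE f.fire_id = 1;
```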
E
How
far
did
the
bus
travel
last
week?
That's
also
a
gisc
question.
You
can
think
about
it
in
a
map.
You
can
express
it
in
spatial
sql,
finding
the
nearest
truck
to
the
transformer.
That's
actually
a
somewhat
more
complex
question,
because
I've
got
two
things
going
on
there.
I
got
transformers
and
I
got
trucks.
Can
I
find
the
closest
thing
using
just
spatial
sql?
Yes,
I
can
not
even
very
much.
E
This
is
an
indexed
nearest
neighbor
query:
what
trucks
are
in
the
service,
depots,
trucks
and
service
depots,
two
layers:
this
is
a
an
example
of
a
spatial
join
again
you
can
express
it
in
sql
and
hardly
any
sql
at
that,
so
you
can
get
all
the
functionality
of
a
gis
without
the
gis
when
you've
got
postgis
in
your
postgresql
database,
so
you're
building
a
web
app
you
haven't
sold.
You
think.
How
do
I
get
this
wonderful
thing?
E
I'd
like
to
use
this
this
gis
without
a
gis
that
you
speak
of
paul,
how
can
I
get
this
wonderful
thing
and
it's
in
the
postgres
database,
so
you
just
have
to
connect
to
port
5432
and
you're
building
a
web
app
you're,
building
an
app
that
runs
sides
and
runs
inside
your
browser.
The
things
you
connect
to
from
that
web
app
are
web
things
you're,
connecting
on
port
80
or
port
443
for
ssl,
so
you've
got
this
disconnect
here.
You've
got
a
web
browser
sitting
out.
E
There
wants
to
connect
things
on
https.
You've
got
the
database
back
here
which
likes
to
talk
to
its
clients
via
the
postgresquery
query
protocol
on
port
5432.
How
do
you
join
these
two
things
together?
Historically
again,
you
might
say
I
need
a
gis,
or
rather
I
need
a
gis
middleware,
and
we
looked
at
that
and
we
said
you
know
you
can
do
that,
but
those
the
gis
middlewares
that
are
that
existed
and
still
exist
are
are
quite
heavy.
They
do
a
lot
of
things
and
we
already
have
all
the
things
back
in
the
database.
E
That
is,
we
want
to
be
able
to
send
a
query
over
http
and
we
want
to
get
back
one
of
two
things:
either
map
tiles,
so
we
can
make
a
visual
display
or
geojson,
so
we
can
get
a
little
bit
more
interrogation
into
the
features
and
make
something
a
little
more
interactive
with
features,
and
those
are
the
two
things
we
want
back
to
be
able
to
put
into
a
map
panel
to
be
able
to
build
into
a
spatial
application.
So
we
asked
ourselves
looking
at
sort
of
the
big,
weighty
spatial
middleware.
What
is
this?
E
Is
that
it's
easy
to
deploy
it's
easy
to
take
a
small
thing
and
put
in
a
container
because
it
doesn't
have
a
lot
of
moving
parts?
It's
easy
to
reason
about
a
small
thing:
it
only
does
one
thing,
so
you
can
figure
out
how
it
works
and
because
we're
doing
all
this
in
open
source.
Another
real
big
advantage
of
a
small
thing
is
it's
easy
to
contribute
to
contributors
can
come
in.
They
can
look
at
the
relatively
small
code
base
and
say
I
need
to
do
this.
E
So
what's
the
smallest
piece
of
software
that
can
solve
this
problem,
and
so
we
started
building
these
small
pieces
of
software,
we
built
two
to
start
with
we're
calling
it
post
just
for
the
web
post.
Just
for
the
win.
Pg
tile
serve
is
a
very
small
piece
of
software
that
provides
http
endpoints,
so
the
web
browser
can
talk
to
it
and
exposes
the
tables
and
functions
in
the
postgis
database.
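For reference, pg_tileserv's HTTP interface follows a simple pattern: each published spatial table or function gets a tile endpoint. The layer name below is a placeholder, not one from the talk:

```
# JSON index of all published layers
GET /index.json

# Vector tiles (Mapbox Vector Tile format) for a table layer
GET /public.parcels/{z}/{x}/{y}.pbf
```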
E
Pg
feature
serve
same
idea,
except
that
it
returns
geojson
collections,
as
the
response
gives
you
endpoints
for
tables
and
functions
and
returns
so
post
just
for
the
web
is
gis.
Without
the
gis
it
lets
you
take
your
web
app
bind
it
to
all
the
smarts,
the
gis
with
other
gis
and
postgis,
and
build
rich
web
applications
without
having
to.
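pg_featureserv's endpoints follow the OGC API Features layout; collection names below are placeholders, and the function name and its parameters are assumptions for illustration:

```
# List published collections (tables)
GET /collections.json

# Features from a table as GeoJSON, with paging and bbox filtering
GET /collections/public.parcels/items.json?limit=100&bbox=-122.1,36.9,-122.0,37.0

# List published functions, and call one with named parameters
GET /functions.json
GET /functions/parcels_within_distance/items.json?lon=-122.05&lat=36.95&dist=1000
```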
E
Back over to whoever's next.
C
So the demo architecture that I'm going to show you here starts, obviously, with PostGIS.
C
So we're going to do that in real time here in just a minute, and then, once I walk through how to build that up in real time in OpenShift, Steve and I are going to show how you can do editing in real time in a cloud-based database. It's actually going to be leveraging another feature found in our Postgres operator for Kubernetes that allows for multi-cluster replication between two Kubernetes clusters.
C
But
now
we
have
two
separate
clusters
of
openshift
stood
up
and
we
have
two
two
versions
of
the
application
one
running
in
each
cluster,
and
but
only
one
of
the
applications
is
the
primary
application.
But
data
is
being
automatically
replicated
and
fed
over
to
our
standby
cluster.
C
So
if,
for
whatever
reason,
something
happens
to
your
your
entire
cooperator,
you
have
the
ability
to
fail
over
and
restore
your
your
application
in
very
short
order
in
your
standby
kubernetes
cluster,
and
also
some
of
the
inherent
features
built
into
our
operator
that
that
come
out
of
the
box
are
high
availability
for
your
postgres
database
and
we're
gonna.
We're
gonna
show
how
an
auto
failover
works
over
just
in
in
our
primary
cluster
as
well.
C
So
that's
what
I'm
gonna
show
you
here
in
just
a
second.
Let
me
go
ahead
and
exit
out
of
this
view
all
right.
So
what
you
should
what
everybody
should
be
seeing
here
is.
I
have
my
two
terminal
windows
where
I'm
going
to
be
doing
the
majority
of
my
work,
and
then
I
have
my
openshift
cluster
ui
here
on
the
left
side
of
the
screen
and
as
I'm
working
in
my
terminal,
you're
stretching
causing
items
to
jump
in
yes,
steve.
F
C
Yeah
correct,
let
me
let
me
share.
C
I
was
saying
we've
got,
we've
got
my
my
two
terminal
windows,
where
I'm
gonna
be
doing
all
the
the
main
work,
but
you're
gonna
start
seeing
all
of
our
pods
drop
into
my
open
shift,
ui
over
here,
as
things
start
getting
built.
So
what
what
are
we
gonna
end
up
building
so
this
is.
C
This
is
a
view
of
the
end
user
application
that
we're
going
to
end
up
building
it's
a
it's
a
demo
application
that
we
we
built
just
to
show
off
the
power
of
crunchy,
spatial
and
posters
and
and
kubernetes,
and
what
you're
going
to
see
is
it's
a
purely
fictitious
application
that
we
thought
might
be
representative
of
how
a
fire
administrator
in
santa
cruz
county
would
be
managing
partial
information
to
denote
whether
or
not
a
parcel
is
considered
a
fire
hazard
or
not
a
fire
hazard.
C
So what you're going to see is that all these green and red lines are tax parcel information that we actually downloaded from the Santa Cruz County GIS website. We loaded it all into the database, then we fed it out through pg_tileserv, and we interact with it through pg_featureserv. What that allows you to do is select a particular parcel, and then this view, which is meant to represent a fire administrator, lets you change whether or not a parcel is considered a fire hazard, and save that off.
C
And
then
this
say
the
update
is
actually
done
directly
in
the
database
and
then
the
next
view
is
same
map
same
view,
but
only
in
this
case
a
fire
administrator
may
want
to
let
a
a
number
of
residents
in
a
particular
area
know
that
a
parcel
is
being
updated
to
be
considered
a
fire
hazard.
So
it
allows
you
to
do
a
geospatial
query
again
directly
against
the
data
in
the
database,
and
none
of
this
is
being
done
in
the
ui
and
turn
all
the
parcels
that
interact
or
intersect
with
that
query
radius.
C
So
obviously,
we've
already
got
this
application
stood
up
and
running.
I
just
wanted
to
let
you
get
a
sense
of
what
we
should
see
working
when
the
when
the
demo
is
completely
done
here.
C
So I'm going to go ahead and switch to my demo project, and I'm going to start by actually building our UI first, since the UI is the part that takes the longest to build, because we are actually building from source using the source-to-image feature of OpenShift. So I'm going to go ahead and drop that command in here, and you can see I'm just grabbing it from my own Git repo, a particular branch, and telling it to use the source strategy, and I'm going to go ahead and kick that off.
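The build being kicked off is an `oc new-app` source-to-image invocation. The following is a sketch only; the repository URL, branch, and app name are placeholders, not the presenter's actual repo, and the snippet just assembles and prints the command so its shape is clear:

```shell
# Placeholders standing in for the demo's real values.
GIT_REPO="https://github.com/example/spatial-web-demo"
BRANCH="demo"
APP_NAME="spatial-web-demo"

# S2I build: OpenShift detects the Node.js source and builds an image.
CMD="oc new-app ${GIT_REPO}#${BRANCH} --name=${APP_NAME} --strategy=source"
echo "${CMD}"
```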
C
Might
take
a
minute
there
we
go
so
now
it's
actually
going
we'll
we'll
see.
We
should
see
a
build
config
pop
in.
C
There we go, there's my build config that popped in, and we should see the build start going here in a second. So, while that build is going: this React application needs a couple more things to make sure it's running appropriately, so I'm going to go ahead and set the necessary environment variables in the deployment config for this particular application. This isn't particularly germane to spatial or anything like that.
G
Your screen is well behind your voice. What we're seeing on the screen, just to let you know, is that fire notification; it's still showing that screen. We have not yet switched.
F
And Adam, it might be that your Wi-Fi has a little lag time, so maybe turn off your video, if you don't mind; that might be the load on your computer. I'm going to do that, so if you don't see his face now, that's my doing.
G
You can start over from the part where you entered the command. There's Adam entering the command: he's logged in to the demo project, and he's running this oc new-app, which, people should know, is building this.
G
This
is
a
a
node,
a
javascript
application
that
we
built
using
react
on
the
front
end,
and
then
you
can
see
that,
like
adam
was
saying
before,
there's
the
build
config
coming
up
and
that
build's
gonna
be
kicked
off,
so
that's
going
to
be
doing
the
whole
source
to
image
process
inside
of
the
openshift
build,
and
so
that
builds
running
adam.
Where
did
you
leave
off?
G
Oh
no.
Now
he's
setting
he's
setting
some
environment
variables
right
for
that
app.
You
can
see
that
there
on
the
right
side
he's
just
doing
the
oc
set
env
for
the
deployment
config
for
spatial
web
demo
and
that's
updated.
C
Yep
so
we'll
go
ahead
and
now
we're
caught
up
to
real
time
so
I'll
I'll
start
walking
through
here.
Hopefully
me
turning
off.
My
video
is
is
going
to
help
keep
things
a
little
bit
closer
to
real
time
but
steve,
please,
let
me
know
if
it
starts
lagging
in
any
way,
shape
or
form
perfect.
So
the
next
thing
I
want
to
do
in
this
project
space.
Let
me
go
back
over
to
my
pods
view.
Here
is,
and
you
can
see
every
time
I
made
an
updated.
C
It
created
a
new
deployment
for
that,
and
so
the
the
next
thing
I
wanted
to
be
able
to
do
is
add
in
that
static
map
tile
server
that
I
mentioned
we
needed
to
have
so
again.
This
is
just
a
custom
map
tile
server
that
we
built
for
this.
This
demo,
you
can
replace
it
with
your
own
map,
tile
server,
there's
a
bunch
of
different
map.
Title
servers
out
there
again,
not
particularly
germane
to
anything
related
to
coasters
or
the
spatial
product,
but
something
necessary
for
the
overall
application.
C
All
right.
So
now
we've
got
the
map
tile
server
and
we've
got
our
ui,
but
we
don't
have
any
data
actually
up,
but
we
actually
don't
have
a
database.
So
if
I
went
to
the
just
the
view
real
quick
here,
we
you
can
see.
We've
got
the
spatial
web
demo,
which
is
our
front
end,
but
but
it
actually
won't
have
anything
in
it.
So
I
won't
even
bring
it
up
because
you'll
see
just
a
a
blank
app
there
with
no
map
on
there.
C
You can have the pgo client, which is just a CLI tool, loaded on your local machine. We also offer the ability to deploy it into the cluster where you have PGO deployed, so that way you don't have to be dependent on anything locally on your machine. So now I'm remoted into the client itself, and here's where I can actually issue the commands to deploy the cluster that we need. So first off, let me find where I'm at here.
C
I'm
going
to
give
my
postgres
cluster
a
name
called
fireapp,
I'm
going
to
tell
it,
which
particular
image
of
the
crunchy
containers
that
I
wanted
to
use,
and
that
is
the
crunchy
postgres
gis,
so
that
that
means
that
it
comes
with
post-just
pre-installed
nrha
container,
I'm
going
to
create
a
database
as
part
of
this
command,
I'm
going
to
call
that
database
fire
data,
and
I'm
going
to
tell
kubernetes
that
I
want
one
replica
created
automatically
on
this
and
then
I
can
define
my
password
for
postgres
as
part
of
this.
C
If
I
left
this
off,
then
kubernetes
would
go
ahead
and
automatically
create
a
random,
a
random
secret
and
in
the
the
cluster-
and
I
wouldn't
have
to
specify
this-
I
just
specified
the
password
for
the
sake
of
this
demo,
just
to
make
it
a
little
bit
cleaner.
So
I'll
go
ahead
and
kick
that
off
and
you
can
see
it
went
ahead
and
kicked
off
a
workflow.
And
if
I
do
a
pga
show.
C
We
can
see
that
it's
in
the
process
of
creating
the
initial
cluster,
and
then
it
has
the
the
replica
running
or
starting
to
run
as
well,
and
you
can
see
over
here
the
first
container
just
created.
We've
already
got
our
backrest
repo
created
here
and
now
it's
spinning
up
our
our
primary
cluster
node
and
once
this
is
done,
we'll
see
another
cluster
pop
in
here
as
well.
C
So
I'm
going
to
give
that
a
second
to
run
and
then
the
other
thing
I
need
to
do
is
again
based
on
the
way
we
have
this
data
structured.
I
need
to
create
one
user
again.
This
is
all
being
done
through
our
our
postgres
operator
and
I
need
to
create
a
user
called
group
and
I'm
just
going
to
have
kubernetes
manage
the
the
password
for
for
us
here
but
again
telling
it
where
what
name
space
to
look
in
and
what
cluster
name
to
create
the
user
in.
C
So
if
I
do
that,
you
can
see
created
our
user
there,
and
so
now
we
have
a
postgres
cluster
or
a
posters
cluster.
We
have
a
user
created
in
our
space,
and
so
now
we
need
some
data
in
that
space.
Now,
as
steve
said,
we're
not
going
to
go
through
all
the
painstaking
steps
of
actually
downloading
shape
files
and
loading
in
we
do
have
a
database
backup
pre-staged.
C
So
I'm
going
to
switch
down
here
real,
quick
and
switch
to
the
demo
project.
Just
so
I
can
stay
logged
into
pgo
up
here
and
then
you
can
see
now.
We've
got
a
primary
class
or
a
primary
database
running,
and
then
we
have
a
backup
still
in
the
process
of
being
spun
up
here.
So
I'm
going
to
go
ahead
and
remote
into
this
one.
C
So
now
I'm
actually
in
the
inside
the
pod
in
in
openshift.
So
I'm
in
this
pod-
and
I
want
to
load
data
into
that
fire
data
database
that
we
created
there.
So
I'm
just
going
to
go
ahead
and
issue
a
p
sql
command.
So,
like
I
said,
we've
got
this
this
this
postgres
backup
prestage.
Then
amazon
s3,
that's
going
to
pull
it
down
and
it's
going
to
load
it
directly
into
the
database
in
real
time.
So
you
can
see
just
that
quickly,
pulling
it
in
and
restoring
the
database
backup.
C
And
we're
done
there
so
now
we
have,
we
have
a
front
end.
We
have
a
map
map
server.
We
have
the
database
created
with
the
data
in
it
and
the
postgres
extension
created
in
there
by
default
because
we
use
the
posters
container,
but
we
still
have
not
connected
the
back
end
to
the
front
end.
So
now
I'm
going
to
deploy
pg
tile
server,
pg
feature
serve
and
connect
that
to
the
database
and
when
pg
tile,
serve
and
pg
feature
serve
are
created.
C
They
automatically
expose
the
data
in
inside
the
database
and
which
those
apis
are
already
part
of
the
web
front
end.
So
when
we,
when
we
actually
connect
that
and
now
I'll
actually
jump
over
here,
so
you
can
see
where,
where
we're
at
so
real
quick,
like
I
said
so,
here's
our
here's
our
demo
front
end
that
we
actually
just
built.
So
it
looks
very
similar
to
what
I
I
showed
you
earlier
right.
There's
no
changes.
C
The
only
difference
is
we
don't
see
those
green
and
red
lines
here
right,
we've
got
the
base
map,
but
and
we've
got
the
overall
ui
framework,
but
we
don't
actually
have
the
data.
So,
just
just
to
show
that
I'm
not
don't
have
any
tricks
hitting
up
my
sleeve
here.
So.
G
For
those
who
are
wondering
the
another
name
for
that
base
map
is
the
raster
tiles.
So
when,
when
adam
was
talking
about
that
before
in
the
architecture
that
that's
basically
just
a
pod,
that's
just
serving
up
raster
images
that
mosaic
together
to
form
that
base
map
layer.
C
Yeah,
maybe
an
easier
way
to
show
it
right
is
so
here
is
the
demo
front
end
that
that
I
just
had
openshift
built
for
me
right,
like
steve,
was
saying
here's
the
raster
base
map,
and
here
is
the
one
that
we're
going
to
be
doing
the
real-time
editing
in
a
little
bit.
So
you
can
see
you
got
the
green
lines
and
the
red
lines.
Those
are
the
actual
vector
tiles
being
served
back
up
directly
from
the
database,
which
we're
going
to
actually
implement
those
services.
C
Now
and
here's
our
demo
front
end,
you
can
see
those
those
lines
aren't
there,
so,
hopefully
that
that
makes
a
little
bit
clearer
all
right,
so
jumping
back
here
go
back
to
my
view
here,
all
right,
so
I
exited
out
of
the
the
database
pod
in
this
window,
and
now
I'm
actually
going
to
deploy
pg
tile,
serve
and
pg
feature
serve
so
again
using
the
new
app
flow
we
pull
down
tile,
serve
and
again
just
that
that
quickly
it
pulls
down
the
container
image
and
to
connect
tileserv
to
the
database.
C
Let me make it a little bit easier, because it's hard for me to see at the bottom as well. So the first environment variables I'm going to set are the username and password that pg_tileserv is going to use, and I'm going to grab those directly from the Kubernetes secret provided in OpenShift. I just do that through this oc set env command: you can see I'm grabbing the Postgres secrets and I'm appending the PG prefix to them.
C
So
there'll
be
pg,
username
and
pg
password
in
there
and
I'm
adding
it
to
that
deployment
config
and
then
once
those
are
actually
set.
The
other
environment
variable
that
I
need
to
create
is
the
actual
the
database
url
environment
variable,
and
so
this
is
what
pg
tile
server
actually
uses
to
connect
to
the
postgres
database,
and
you
can
see
I'm
actually
defining
what
that
environment
variable.
C
Is
there
but
again
using
pg,
username
pg
password
grabbing
the
fire
replica,
the
fire
app
replica
service
host,
and
so
that's
the
actually
provided
from
kubernetes
and
notice,
I'm
using
the
replica.
So
when
I
created
the
postgres
cluster,
it
created
a
primary
and
it
created
a
replica
database,
I'm
actually
connecting
pg
tile
server
to
the
replica
database,
because
pg
teleserve
is
a
read-only
transaction.
It's
not
actually
doing
any
updates
to
the
database.
So
we
can
use
the
replica
service
host
on
that,
because
the
replica
database
is
a
read-only
database.
C
So
we
want
to
offload
any
sort
of
workload
from
from
the
primary
database
and
put
it
on
the
read
replica
so
we'll
go
ahead
and
create
that
so
update
the
deployment
config
and
you
can
see-
because
I
didn't
have
the
deployment
configs
completely
correct-
you
can
see
pg
tile
server
actually
aired
out
because
it
couldn't
actually
connect
to
the
database,
but
now
that
I've
actually
got
it
connected
to
the
database.
It's
completed
it's
actually
running
and
it's
up,
and
the
last
thing
I'll
need
to
do
is
just
expose
that
service.
C
So
if
I
jump
back
over
here
to
my
routes
now
we
have
pg
tileserv
and
just
to
give
you
an
idea
of
what
that
looks,
like
here's,
a
basic
ui
out
of
the
box
from
that
that
comes
with
pg
tileserv
and
we
can
go
ahead
and
click
on
this
preview.
G
If this was just raster tiles, we couldn't do this, but because it's vector tiles, we can actually tag information to all the vectors, allowing us to not have to go back and forth between the web server and the database server just to display information, and you can also style on the fly. The raster tile server, the one that we had before, serves raster images, I think JPEGs or PNGs, something like that.
C
Yeah
thanks
steve,
and
so
now
I
brought
up
the
demo
application
that
we're
building
in
real
time
and
by
comparison
now
you
can
see
it
looks
very
similar
to
this
the
the
application
I
brought
up
initially,
but
now
I
can.
I
can
click
on
this,
but
the
ui
doesn't.
Let
me
do
anything
and
that's
because
all
these
interactions
through
the.
F
We're finding out that the internet is not evenly distributed today. In rural Wisconsin, or wherever you are, the digital divide exists.
F
Yes, I can imagine, and there's much more work to do.
C
Yeah,
so
let
me
know
when
it's
caught
back
up
to
kind
of
the
base
map
there
steve.
F
Still
waiting,
that's
okay,
I
was
joking
with
steve
a
minute
ago.
Is
that
a
long
time
ago
I
I
was
doing
demos
on
openshift
and
I
was
demoing
deploying
wordpress
with
open
street
maps
and
things
like
this
and
we
there
we
go
we've
caught
up
now
and
it
we've
come
a
very,
very
long
way.
So
now
we're
on
your
screen.
Okay,
all.
C
Right,
so
what
I
would
was
starting
to
show
is
so
even
in
the
in
the
demo
ui
now
we've
got,
we've
got
the
parcel
layers
here
right
you
can.
We
can
see
the
green
lines,
the
red
lines.
We
can
even
click
on
them
to
select
them,
but
when
I
actually
try
and
use
the
interface
here
it,
it
doesn't
do
anything
right.
You
can
see
an
error
has
occurred
and
that's
because
we
haven't
deployed
pg
feature
serve
yet
pg
feature.
C
Server
is
actually
what
drives
the
interaction
or
allows
the
interaction
with
the
database,
and
that's
because
all
that
interaction
is
being
done
through
functions
that
were
created
in
the
database
and
we'll
we'll
walk
through
that.
As
I
build
up
pg
feature
serve
so
go
ahead,
I'll,
go
ahead
and
reduce
this
back
down
all
right.
Let
me.
C
All
right,
so,
first,
I'm
going
to
go
ahead
in
the
same
workflow
that
we
did
for
whoops.
Not
I
don't
want
to
expose
the
service
yet
because
it
hasn't
been
created
so
same
same
workflow
for
that
we
did
for
pg
tile
serve,
I'm
going
to
do
for
pg
feature
feature,
pg
feature
serve,
or
maybe
I'll,
just
stammer
and
stutter
a
lot,
and
that
will
take
a
lot
a
lot
more
time.
C
And then add the database connection URL as well. Actually, before I paste that in there, I'm going to go ahead and clear this out, just to make it a little bit easier to see here. The main difference, though, is that because pg_featureserv is actually going to be updating data in the database, we don't want to connect it to the replica service host; we're connecting it to the primary.
C
So
you
can
see
that
right
there
and
that's
because
the
primary
is
what
allows
us
to
actually
make
changes
to
the
data
in
the
database.
So
pg
pg
feature
service
connected
to
our
primary
database,
pg
tile
service
connected
to
our
replica
service
host,
and
that's
actually
going
to
be
something
important
to
keep
in
mind
a
little
bit
as
we
show
the
auto
failover
workflow
between
the
two
here.
So
I
went
ahead
and
updated
the
deployment
config
there
and
then
last
but
not
least,
expose
the
service
out.
C
And
so
now,
jumping
back
over
to
this
ui,
I
can
go
to
our
routes
and
you
can
see.
We've
got
pg
feature
serve
as
an
available
route
now
bring
up.
The
pg
feature
serve
just
kind
of
stock
engineering
ui
and
if
I
go
to
collections
you're
going
to
see
the
same
layers
that
that
we
had
available
through
pg
tileserv,
these
are
being
returned
back
as
ogc
apis,
though
so
there's
they're
a
lot
more
rich,
a
lot
more
feature
dense
if
you
will-
and
we
also
have
the
ability
to
view
functions
now.
C
If
I
click
on
here,
we
don't
actually
have
any
functions
available
to
us.
Yet
we
are
going
to
create
those
in
just
a
minute
and,
as
I
said
before,
the
interaction
for
our
application
is
actually
being
done
through
functions
that
reside
in
the
database
that
are
accessible
through
pg
feature
serve,
and
so,
even
though
we
have
pg
features
sort
of
deployed
now
it
is
actually
available
through
an
api.
If
I
go
back
to
our
demo
application
here
and
refresh,
I
still
don't
have
the
needed
interaction.
C
I
can
still
click
on
these.
I
can
highlight
them,
but
when
I
try
and
do
anything,
it
doesn't
actually
do
the
update
again
because
we're
missing
those
those
couple
of
key
functions.
So
the
functions
that
we're
going
to
add
our
two
two
pretty
simple
ones,
we're
going
to
add
a
parcel
within
distance.
C
That's
going
to
be
able
that's
going
to
allow
us
to
do
our
distance
query
and
then
our
partial
set
fire
hazard
function
and
that's
what's
going
to
allow
us
to
set
the
attribute
information
in
the
database
from
the
fire
hazard
attribute
from
either
yes
to
no
or
no
to
yes,
and
so
you
can
see
here's
here's
the
code
that
is
required
to
create
those
functions,
and
so
now
I
actually
need
to
create
those
in
a
database.
C
The other thing I should add in here: right now pg_featureserv knows to look for the postgisftw schema, so you can see over here, that's the first thing that happens in this DDL. We actually create that schema, then we create the functions within that schema, and once those are both created, we'll be able to see them in the pg_featureserv UI. And I should probably stop trying to multitask, because I keep messing things up here, all right.
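The DDL pattern being described can be sketched as follows. pg_featureserv only publishes functions from the postgisftw schema; the function, table, and column names below are illustrative stand-ins for the demo's actual code, not the real DDL.

```sql
-- pg_featureserv looks for functions in this schema
CREATE SCHEMA IF NOT EXISTS postgisftw;

-- distance query: return parcels within a radius (meters) of a point
CREATE OR REPLACE FUNCTION postgisftw.parcels_within_distance(
    lon float8, lat float8, radius float8)
RETURNS TABLE(parcel_id text, geom geometry)
AS $$
  SELECT p.parcel_id, p.geom
    FROM parcels p
   WHERE ST_DWithin(
         p.geom::geography,
         ST_SetSRID(ST_MakePoint(lon, lat), 4326)::geography,
         radius);
$$ LANGUAGE sql STABLE;

-- attribute update: set the fire hazard flag on one parcel
CREATE OR REPLACE FUNCTION postgisftw.parcel_set_fire_hazard(
    id text, hazard text)
RETURNS TABLE(parcel_id text, fire_hazard text)
AS $$
  UPDATE parcels SET fire_hazard = hazard WHERE parcel_id = id
  RETURNING parcel_id, fire_hazard;
$$ LANGUAGE sql VOLATILE;
```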
C
That time, there we go; it helps if I have it in the right place to actually be able to write the file. So there we go: we can see that pulled down that DDL and then just loaded it directly in. It's got a whole bunch of awesome coordinates in there as part of the test, and so now, going back over to our interface here, let me go full screen.
C
In the UI now, we'll see that we've got a fully functioning interface here. We've got all the attribute information back here that wasn't present before, and I can go ahead and hit yes, and you can see it changed to red, because the attribute information was actually updated directly in the database. And I can go over to the active fire notification and select that, and now I can do my distance query; we'll do a different distance this time, and you can see I'm getting all that information returned back.
C
So now we have a fully functioning web application, and it was all built out in real time. That took me maybe about 25 minutes, including time to actually pause and let the system join up here. And so I'm going to go ahead and make a couple more changes here, and we're going to do just another quick live demonstration of the failover capability.
C
So you can see I'm making a few changes here, and the reason why I want to do this is to show you that it's actually being stored off in real time in the backup, and it's going to be restored after a failure. So I changed this parcel, this parcel, and this parcel to red from green, and if I go back into... let me shrink back down here real quick.
C
So remember this view: I'm still in my primary Postgres pod, and so I can see pgdata is where all the primary data is actually stored. And let me just do one more quick pgo show up here, so you can see: here are my databases running in my cluster. We've got my primary here; so remember, w7cj2 is the primary and the replica is gntsb.
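The check being run here can be sketched with the v4 `pgo` client; the cluster name is an illustrative placeholder:

```shell
# list the instances in the Postgres cluster and their current roles
pgo show cluster hippo
# the output identifies each pod and whether it is the (primary)
# or a (replica), matching the pod-name suffixes mentioned above
```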
C
So I'm going to go ahead and intentionally crash my database, which is always something fun to do in a live demo, all right, because that's what you hope doesn't happen. But we're going to go ahead and do it just to show how auto failover works. So I'm going to go ahead and remove the pgdata file and...
C
And it said permission denied, but actually, if I go back up here and show, you can see now our primary cluster is in an unknown state, because I corrupted the data file. And now, automatically, in just that amount of time, it switched gntsb to primary. So it actually tore down the former primary, and it's in the process of restoring it back to a replica, and you can see, just that quickly, that's now back up as a replica. Now, you can see, well, obviously, nothing happened here.
C
If I go back here, all the data is still the same as we had it before. But remember, when I connected pg_tileserv, it was connected to the replica service, and pg_featureserv was connected to the primary service. And so, even though the cluster has failed over and the pods are not the same anymore, I can still go in here and make updates to the database. Let me select that one.
C
Let me change that back to no, and you can see the interface is still fully functional, fully working. And just that quickly, the operator went ahead and failed over from the replica to the primary, and there was no impact to our user application. So when you're talking about a mission-critical application, a business-critical application, data assuredness, you want to make sure your data is always up, always available. Having a highly available Postgres database driving your back-end
C
application is a critical piece of that, and it really helps make sure your application is always up and your data is always available to your end users. So, Steve, that wraps up the real-time build of the single-cluster instance. Do we want to switch over and show how you can remote in using QGIS and do data updates, and then we can also show how it's being replicated between the two clusters?
G
Sure. Before I do that, though, can I put you on the spot? Sure. Do you remember the command to scale up the replicas?
C
G
We don't actually enable that for the most part, because with databases you don't want to be spinning up replicas and creating all that bandwidth in general. But you can manually go in, and if you do this again, you'll see that we can now just create another database replica on the fly.
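The scale-up being shown can be sketched with the v4 `pgo` client; the cluster name is an illustrative placeholder:

```shell
# add two more replicas to the existing Postgres cluster
pgo scale hippo --replica-count=2

# replicas can later be removed again; --query lists candidates first
pgo scaledown hippo --query
```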
G
So if we were serving up a lot of tiles and really needed it... so it just created two more replicas, right? Yep, we just increased by two more, so now we're running three replicas. And this is also the part that shows how pg_tileserv and pg_featureserv running inside of OpenShift are cloud native, right? Because there's no state stored in pg_tileserv or pg_featureserv; all the state is in the database.
G
So if you really need it, we're now reading off of more replicas, but we can also take pg_tileserv and just go ahead and scale it up, and it doesn't matter, right? It's always talking to the service, so it'll know how to route to the replicas that are behind that service, and now we have three, and so they're load balancing to the replicas.
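Scaling the stateless tile server itself is just an ordinary deployment scale; a sketch with an illustrative deployment name:

```shell
# run three pg_tileserv pods; the service in front load-balances
# across them, and each pod reads from the replica service
oc scale deployment/pg-tileserv --replicas=3
```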
G
In addition to having three new services coming up, right, this isn't something where there's state stored somewhere. There are some other map servers out there that store state actually in the application itself, and those become very hard to scale up, and they're actually quite a hog in terms of resource usage. If you look at pg_featureserv right now... so if you click on... do we have metrics for them?
C
I can probably, yeah. Let me hop over.
G
I think if you just click on the... oh yeah, you can click on that too.
G
You can... well, that's the larger view, but if we look at it later, we're going to actually be able to see the metrics for the specific pieces, and you can see that they're basically consuming almost no resources to do their work, because they're built in a lightweight, cloud-native format, rather than a one-app-server-to-rule-them-all kind of format.
G
So I just wanted to make that point clear while Adam was doing his demo. So I'm going to take control now. And so let me make sure people can see; can you see my map? Absolutely, yep, we can. Okay, so this is the exact same app, the exact same map, coming off of Adam's space from before, right? And you can see we've got the... let's see, I don't know if we're serving the exact same... let's check. Let's go here to refresh the state on the map; we come back here.
G
This is part of the demo, because this is actually the failover cluster: that replica we had where the data was being replicated between two distinct OpenShift clusters. I'm attached to that data set right now, and this is the primary in that cluster. That's why you don't see Adam's changes: pre-configuring all that cluster change is not something we can do easily on the fly. I mean, we probably could have, but in the interest of time we pre-set that up beforehand.
G
So what I'm going to show now, though, is... it's quite often in GIS frameworks that the users are desktop application users, not web users, right? So that web administrator view that we showed before, that might be good for someone who's not into the GIS stuff, but the GIS people actually usually use a desktop application. So I have a desktop application up; I did a port forward.
G
So here you can see me... oh, it's not on the screen here. You can see me port forward to the primary, right? The primary doesn't have any letters; the replica has some letters and another set of random letters in it, so we're actually talking to the primary. And I've port forwarded 5432 to 5432 on my local machine, which is the port for Postgres, so we're connected to that. Move that back out of the way, and then I'll bring up the desktop app.
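The port forward being described can be sketched like this; the pod name is an illustrative placeholder:

```shell
# forward local port 5432 to the primary Postgres pod's 5432,
# so desktop tools like QGIS can connect to localhost:5432
oc port-forward pod/demo-w7cj2-xxxxx 5432:5432
```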
G
So inside of here, I had already made a fire spatial connector, and that is actually connecting to the database in the primary, and we can see everything in there. I've already dropped the assessor parcels on here, and we're going to actually update this parcel here; this one was already fire. So if I click on it and we look all the way... I don't know if this is actually viewable on your screen, but if we look all the way down, let me see where I can find it: so, fire hazard.
G
There it is, there it is. So you can see it's set to yes, and if I go back to my map view quickly, we're looking at this parcel right here. So we're going to do the use case where there's a GIS analyst, a desktop analyst, back at the main spatial processing center for the fire department or for the county, and they said: oh, you know what, we've gone out and inspected this one, and this parcel is actually now high fire potential.

So what I can do here is actually select this parcel, and we can look down at the fire hazard again, and... where'd it go, where'd it go, where'd it go? There are a lot of attributes, but you can see this one says only "yes, portion," so not the whole thing. We want to change that. So what we're going to do is, now that we've selected this, we can start editing on it: so, toggle editing.
G
There... oh no, that's timber; that one always catches me here. Fire hazard, so we're going to go and edit it, all right. So we're editing on the primary; there we go, it's all done. You don't see anything in the GIS view yet, because... did I toggle editing off? Nope, I've got to turn off the editing. So that's saved, and that's the basic flow.
G
Toggle editing basically opens up a transaction, and when I save, that actually commits the transaction to the database. And now, if I go back to my view and I refresh the page... I mean, if we were really fancy, we would have some sort of WebSockets, but it doesn't happen often enough to do WebSockets. But now you can see it's gone out to the replicas, right? Because the replica is driving the tile serving, you can see the tile service now saying this is a fire hazard on the map, all right.
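The toggle-editing/save cycle amounts to an ordinary transaction against the primary; a minimal sketch, with illustrative table and column names:

```sql
-- roughly what the QGIS editing session issues on save
BEGIN;
UPDATE parcels
   SET fire_hazard = 'Yes'
 WHERE parcel_id = '12345';  -- illustrative id
COMMIT;  -- once committed, the change streams out to the read replicas
```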
G
You can do a dual functionality, or a dual user-type role, given that we can have stateless servers and that we have access to this database with all its replication set up for us out of the box. And so you can have desktop people changing data and analyzing data, and they're the only ones that have permissions, instead of doing it maybe through here; or you can have web users do it, and it's all synced up together. The other benefit that would have happened is in here:
G
If I wanted to run those functions in the postgisftw schema, I could have actually run those functions as a call inside the database here. So, like Paul said in the beginning, we believe, not surprisingly, that the database is the center of all the knowledge, and this way any client that connects can actually use the same functions, change the same data, everything's replicated and synced up, and the database is taken care of by the operator.
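That point, the same database function reachable from any client, can be sketched like this; the connection details, route hostname, and function signature are all illustrative placeholders:

```shell
# from a SQL client connected directly to the database:
psql -c "SELECT * FROM postgisftw.parcels_within_distance(-122.5, 45.6, 500);"

# the same function invoked over HTTP through pg_featureserv's API:
curl "http://pg-featureserv.example.com/functions/parcels_within_distance/items.json?lon=-122.5&lat=45.6&radius=500"
```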
G
So with that, I'm done with the changes, and now comes the part, like the 1812 Overture, where the cannons go off at the end. We're at, hopefully, fingers crossed: Adam was going to demonstrate failover between clusters. So, Adam, you can take back the... go ahead, take back the presentation if you want, I'll stop sharing. Or, you've got it.
C
So yeah, the 1812 Overture is going to be a little less dramatic; I'm not going to demonstrate the full failover, but what I do want to demonstrate is just the replication between the clusters. Because, as I was showing before, in a single cluster you can do automatic failover of a single pod; between two clusters, it is a little bit of a manual failover process. When your primary cluster goes down, you switch over to your standby cluster.
C
You have to bring that down, and then you have to promote it back to a primary. It can happen very quickly, but it's not automatic. And you also want to be very deliberate about that, because maybe, with your primary cluster, you want to just go ahead and spend the time to bring your primary cluster back up, your primary Kubernetes cluster, and not do the full failover. So you want to be a little bit more deliberate about failover between clusters.
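The deliberate cross-cluster switch being described can be sketched with the v4 `pgo` client; the cluster name is an illustrative placeholder:

```shell
# on the standby OpenShift cluster: take the standby Postgres cluster
# out of standby mode and promote it to an active, writable primary
pgo update cluster hippo --promote-standby

# once the original site is healthy again, it can be stood back up
# as the new standby for the promoted cluster, e.g.:
# pgo create cluster hippo --standby ...
```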
C
But what I do want to show is... so here we're back in OpenShift, obviously, and instead of the demo project, I'm going to jump down to the project that Steve was just showing. And so here's where we have the multi-cluster version of it stood up; you can see we've got the same version of the demo here. So I'm going to go ahead and clear
C
all these out and make sure I'm using the right one. And so we have this version of the application running in the primary cluster, and here's the version running in the standby. You can see there's a little number two right here that tells me I'm on a different one, but they should look identical. And what I'm going to do here is... you can see here the changes that Steve just made. I'm going to go ahead and change this back; I'm going to say, Steve...
C
No, no, we're going to hop over here, back to our standby cluster. You can see these are still showing up as red; that's because, to replicate over there, it does have to go through an S3 bucket, and it takes about 10 seconds or so for that replication to actually happen in full. So I'm going to go ahead and just give it a couple more clicks, and you can see, already, just in that little bit of time, these two have changed. But again, because this is on the standby cluster,
C
I can't actually make any changes to this one, because this entire database, including the primary, is marked as a standby. So we're not going to run into any sort of transactional conflicts from somebody accidentally going to this one and trying to make an update. It's not going to
C
let me, because the entire database, primary and replicas, is read-only, and they're pulling from the primary cluster in the primary Kubernetes instance. But I still have the ability to do my radius search here, just because that's just a basic read query.
C
So I can go ahead and do that. So hopefully that highlights the ability to do not only a highly available Postgres instance in a single Kubernetes cluster, but also to do replication between Kubernetes clusters. And again, there are all sorts of interesting, different architectural things that you could do with that when you're replicating at the database level, and you're making sure that all your data is consistent and available throughout your architecture. And if any one part of your overall enterprise architecture goes down,
C
you have your data available in different instances, and you can quickly restore and bring that back up and replace your active application. And so with that, I'm going to go ahead and bring slides back up here, hopefully. And so, as Steve mentioned, if you want to learn more, you can always go to our website and learn more about Crunchy Postgres for Kubernetes and the documentation. We do have a learning portal.
C
So if you go to crunchydata.com and go to our learning portal, learn.crunchydata.com, Steve's got a great site there, full of Katacoda courses, not only for the Postgres Operator but a bunch of other Postgres courses. You can learn more about PostGIS, obviously Crunchy Spatial, or you can email us at info@crunchydata.com.
G
No, I just... I hope people got a sense of the way that we've moved into a cloud-native architecture for spatial, right? If you look back at the... can I just show my slide, just for a second? I'll show my slides; here we go. So if we look back at this architecture diagram, I don't need to do the two different ones together.
G
All of this is made possible by OpenShift, right? This high level of automation. If you notice, we had basically similar commands for every single thing we did; there was a common operating plane coming through OpenShift. So if you were to stand up this cluster yourself, you don't have to learn how to install Postgres. You don't have to learn how to install pg_featureserv. You don't have to learn how to scale it up. You don't have to learn how to put the load balancer in front.
G
You don't have to do any of that stuff, right? And even the tile map server, which you could scale up, it's the same paradigm you would use for anything else, like NGINX or JBoss EAP. It's all the same paradigm for how you work with it. And the other nice part is we've got tons of automation going that is just taken care of by the platform itself. You saw the power of the operator, right? The operator knew how to do failover on its own.
G
We didn't have to read through and configure and do all that stuff that we would normally have to do to set up a primary and read replicas, or the failover itself. That's, like, years' worth of work that all comes out of the operator framework and what we've done at Crunchy to build that Postgres Operator. And then you saw we built pg_featureserv and pg_tileserv to be cloud native: they're small, there's no state, they scale up, they respect the way that Kubernetes is expected to work.
G
You just set a couple of environment variables for the password; you can actually even set them in the deployment config directly. We built the configuration of it to work with Kubernetes and OpenShift, and so then you can build your UI layer, but all of it's taken care of, and you're not learning some new paradigm. So we believe this is the way forward.
G
I mean, Kubernetes, the cloud, all that stuff has really made it nice for us to do, as Paul said, PostGIS for the win, or for the web, both of them together, right? It makes it much easier for everybody, and in a scalable way. So that's where I just want to end, and we love talking about it, so any questions, shoot them our way.
F
Absolutely, and I'm sure we will. And this really... I think you really have shown us the path forward for cloud native, and the future of GIS being a serious cloud-native offering here. This is really amazing.
F
I know, you know, the internet may have failed us a few times here, but if people really clue into this replication between clusters and the different things that we're showing here, this is light years from where we came, you know, jokingly, with plug-ins to different things. And the ability to drill down and scale this stuff now is just amazing. So I really appreciate you guys taking the time here.
F
Yeah, if you want to just say where, if people want to get involved in the open source side of this, the PostGIS side, where should people go to find you, to participate in that community?
G
Exactly. Which... I just have QGIS up all the time. Where is... oh, here's what I wanted, here we go, right. So here: PostGIS. Is it postgis.net? Yes. So this is the PostGIS site, and then there's a whole part on it about development, right? So here's how you get involved; they've got a nice list, and they're very friendly people. Paul... I think it's mainly Paul and Regina that run it, is that right? Pretty much.
E
G
Crunchy Data, right, and their repositories are... here's pg_tileserv, right on the front page. And as Paul said, it's really slim code. If you dig into this, I think the biggest thing in here is probably the documentation in Hugo, or the examples. Most of it's happening in... he can do it this way because the database is so powerful. Basically, this is just a very small shim layer that handles the HTTP-to-database transactions, and that's all it's doing; the database is doing all the work.
G
So you don't have to do all these complicated calculations or writing inside the code. And then the other one is the exact same way; this is the one that Martin works on, which is pg_featureserv, and it's the same. One thing we didn't get to show that I think I would like to show, since APIs are the future... if we go back to, now I have to find it, three deployments, I'm confused. I mean, I'm so used to the 3.x OpenShift interface that I get very confused with this new
F
one that you guys have going. We're currently running a customize-your-UI contest, so if you're into that, and you want to customize it backwards so it looks like the 3.x console, yeah, you can enter the contest. That's...
G
What I'll do is basically take the code base from the 3.x and customize it. Look, so I wanted to go to the routes; that's really what I wanted to do, I was trying to find the routes. So here's the pg_featureserv route; I'll bring up pg_featureserv again, and the thing I think that's really cool is the features API.
G
The GIS world is usually a couple of years behind the rest of the programming world in catching up, but this time it's really great. Adam mentioned we're OGC compliant for pg_featureserv. The OGC is kind of like the W3C, right? The W3C is the working group for the web; the OGC is that for spatial. And they're excited: they're doing a whole new configuration, rewriting their specs so that they're actually modern web.
G
There's a library that it's waiting for, that it can't get. So if I stop it, maybe... there we go. So now you get the Swagger spec for pg_featureserv, so you can interrogate it just like you would any other API, and it behaves exactly the same way, and again, it's stateless. So it's RESTafarian, or REST-like, in what it does. So you can look at getting the collection, right? How do I make that call?
G
Back in the day... I say that just because, you know, there's a guy in the spatial community, Sean Gillies, who was very early into REST, and there was a bunch of us who were tracking REST, and Sean was very RESTafarian. I think RESTafarians are the people who follow Fielding's original dissertation and say that unless you're doing the original-dissertation version of REST, you're not doing REST. Those are the RESTafarians, and Sean was one of those people, and I was not.
G
I was just like, I'm just going to do what makes my life easier and programming easier, and if I put some function calls in there or something, so sue me. I don't care; I'm just trying to make life better.
F
Oh, this... I can see there's a debate coming here. Oh goodness. Well, thanks again, everybody, for coming today. You know, as we said, the internet's not distributed equally, but I think we got the gist of everything here, and I'll put this up on YouTube shortly, probably end of day today sometime, and you will see it, and we can annotate it and put links in to some of these things. And if you share with me your slide deck, I'll also link that into it as well.
F
So great work, amazing partners at Crunchy Data; love what you're doing with both GIS and, Paul, keep doing the good work there. Anything we can do to promote this, I'm thrilled with it, because, yeah, I don't know anyone on the planet who doesn't like good maps, and...
G
F
Yeah, yeah, who... anyone who does? I don't know if you can see it, but I'm a huge map and globe enthusiast; I even have my light bulb out there in the background, that's how hooked I am on mapping. So I so appreciate the work you guys do. So take care, we'll talk to you all soon; someday we'll see you all again in person, hopefully. So stay safe, keep doing the good work, and we'll talk to you all soon. Take care, guys, thanks.