Jan 04

Direct Connect made EZ at ArcGIS 10.1

As ESRI pushes more customers toward using Direct Connect, the biggest pushback I see is the complexity of dealing with the Oracle client. Installing the full Oracle client on every client machine, and configuring TNSNAMES entries, can be a daunting task.

Oracle recently introduced their “Instant Client” package, which greatly simplifies using the Oracle client. ESRI has embraced the Instant Client, making the move to direct connect very simple. This blog post will walk you through the process of establishing a direct connect at ArcGIS 10.1 using the Oracle Instant Client 11.2.0.3.0.

Step 1: Downloading

 

The Oracle Instant Client can be downloaded from Oracle’s website by going to Oracle.com and selecting Downloads -> Instant Client, or directly from here:

http://www.oracle.com/technetwork/database/features/instant-client/index-097480.html

ArcGIS 10.1 still supports only the 32-bit Oracle client, so make sure you download the 32-bit version.

Note: You will need to make sure you have an Oracle account to download the software.

You only need the Instant Client Basic Lite package (Basic works too; it’s just a bit bigger). You can also download the SQL*Plus package if you want, but it’s not required for direct connect.

Step 2: Unzipping / Installing

 

Once you’ve downloaded the zip file, simply unzip it into any directory. In my example, I unzip it into C:\Oracle, which gives me a C:\Oracle\instantclient_11_2 folder.

 

Next, we need to add this directory to our PATH environment variable. If you’re not familiar with environment variables on Windows 7, right-click Computer and select Properties. Go to Advanced System Settings, and hit the “Environment Variables” button at the bottom.

Look in the bottom section labeled System Variables, and scroll down to “Path”. Highlight that and select “Edit”.

Add the “C:\Oracle\instantclient_11_2” directory to your path. At the front, or the back, it doesn’t matter. Select OK three or four times to get all the way out of the System Properties.
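If you prefer to do this from a command prompt, something along these lines should also work (a sketch, not part of my original walkthrough; note that setx writes the combined value back into the user PATH and truncates anything over 1024 characters, so the GUI route above is the safer one):

setx PATH "%PATH%;C:\Oracle\instantclient_11_2"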

Step 3: Configure Direct Connect

At this point, we are ready to connect to Oracle. So start up ArcCatalog, and go to a new Database Connection. Fill out the dialog box with the following values:

Database Platform: Oracle

Instance: sde:oracle11g:<EZCONNECT_INFO>

Authentication Type: This is specific to your installation.

 

For those of you used to Oracle TNSNAMES, this will be a magical moment. The <EZCONNECT_INFO> portion uses Oracle’s Easy Connect syntax, which is a much easier way to connect to Oracle.

  • If your Oracle listener uses the default port 1521, you simply use: server/instance
  • If your Oracle listener uses another port, you use: server:port/instance

In my case, my Oracle server is called “JTB-ORACLE” and my instance is “sde101”.
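So the Instance value in my Database Connection dialog (assuming my listener is on the default port 1521) is simply:

sde:oracle11g:JTB-ORACLE/sde101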

 

Easiest. Oracle. Connection. EVER.

 

 

 

Jul 23

Python and GeoProcessing on Linux, part 1

After reading this ESRI blog post, I decided to give this a try with ArcGIS 10.  I love playing with Linux, and I’m pretty good with Python, so this seemed like a fun experiment.

I set up a VirtualBox VM with Fedora 13 (32-bit).  Next, I installed Eclipse.  The Fedora version of Eclipse installs PyDev for you, so a separate install of that is not required.  Should you want to update PyDev, inside Eclipse go to Help->Install New Software.  Click the Add button to add a site.  The location is:  http://pydev.org/updates.  Eclipse includes PyDev 1.5.5, and the updates list 1.6.0 as being available.  I have not tried the upgrade yet, so do this at your own risk!

Next comes the ArcGIS Engine installation.  Pretty straightforward.  Mount the CDROM (or ISO, or however you want).  You must install the EngineRT first, then the ArcObjectsSDKJava.  If you try to install the ArcObjectsSDKJava first, it will give you an error message about the Runtime and bail.

To install the Engine Runtime:

$> cd /media/ESRI/linux/EngineRT

$> ./Setup

 

Walk through the wizard, which is very simple.  I installed the Single User setup.  I got a warning about my system being “not supported”, but we knew that already.

To install the ArcObjectsSDKJava:

$> cd /media/ESRI/linux/ArcObjectsSDKJava

$> ./Setup

At the end of the ArcObjectsSDKJava install, you will be asked if you want to register your software.

ArcObjectsSDKJava Registration

Select the checkbox, and select Done.  You will then walk thru the Authorization wizard, which appears to be running on Wine (it has an XP interface!) 

Authorization Wizard

Make sure you authorize both Engine Runtime and Engine Developer Kit.  You should get your successful confirmation at the end of the process.  You can confirm your license with the Administrator window.

License Confirmation

To finish part 1 of this post, we are going to use a terminal to verify that everything is installed correctly.  Start a terminal by going to Applications->System Tools->Terminal.

In your terminal, we will source the “init_engine.sh” (assuming you use Bash).

$>source ~/arcgis/engine10.0/init_engine.sh

$>

You should not see any error messages.  Now, start a python session:

$> python
Python 2.6.5 (r265:79063, Apr 27 2010, 11:08:55)
[GCC 3.4.6 20060404 (Red Hat 3.4.6-8)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import arcpy
>>> print arcpy.ProductInfo()
EngineGeoDB
>>>

Success!  We can also list the tools available:

>>> import arcpy
>>> tools = arcpy.ListTools("*_management")
>>> for tool in tools:
...     print tool

You should see a list of tools in the _MANAGEMENT toolbox.

There is some definite quirkiness going on, and I haven’t had time to research much of it.  But if you try this:

>>> print arcpy.Usage(tool)


unicode(string [, encoding[, errors]]) -> object

Create a new Unicode object from the given encoded string.
encoding defaults to the current default string encoding.
errors can be 'strict', 'replace' or 'ignore' and defaults to 'strict'.

However, you can do this:

>>> print arcpy.Usage(tool.encode('utf-8'))


SynchronizeMosaicDataset_management(in_mosaic_dataset, {where_clause}, {NO_NEW_ITEMS | UPDATE_WITH_NEW_ITEMS}, {SYNC_STALE | SYNC_ALL}, {UPDATE_CELL_SIZES | NO_CELL_SIZES}, {UPDATE_BOUNDARY | NO_BOUNDARY}, {NO_OVERVIEWS | UPDATE_OVERVIEWS}, {NO_PYRAMIDS | BUILD_PYRAMIDS}, {NO_STATISTICS | CALCULATE_STATISTICS}, {NO_THUMBNAILS | BUILD_THUMBNAILS})
Synchronize selected rasters in the mosaic dataset
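Putting the two together, here is a minimal sketch (same Python 2 / arcpy session as above) that prints the usage of every management tool, using the encode() workaround:

>>> import arcpy
>>> for tool in arcpy.ListTools("*_management"):
...     # encode to a byte string to work around the unicode quirk
...     print arcpy.Usage(tool.encode('utf-8'))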

If you get some error messages, leave me a comment and I’ll try to answer (I got a ton of errors before I had success).

In Part 2 (soon to follow), I will document how to configure Eclipse and PyDev to work with ArcPy.

Jan 31

Running Cygwin SSH as a Service

This is not a complete installation guide, there are plenty of those available via Google.  These are my hints/tips, so I remember how to do it next time.

1)  This assumes you have already installed Cygwin, along with SSH.  You need to run the config script:   /usr/bin/ssh-host-config

2) This will call CYGRUNSRV.EXE, which creates the service for you. 

3) I don’t want to run it as a local (CYGSRV) user; I want to run it as one of my existing users.  To change that, go into the service and change the logon parameters.  Stop the service.  Then run these three commands from a bash shell:

chown <user> /var/log/sshd.log

chown -R <user> /var/empty

chown <user> /etc/ssh*

Now restart the service, and it should be running as <user>.
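If you prefer the shell over the Services console, cygrunsrv can do the stop/start as well (assuming the service was created with the default name sshd):

cygrunsrv --stop sshd

cygrunsrv --start sshd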

Dec 26

Creating ISO files on Windows

WARNING:  This is a NON-GIS-related post.

In the past 6 months, I’ve started using a lot more “digital media”.  We bought a Canon Vixia HF200 HD camcorder for recording the kids.  We bought a Canon Rebel XTi digital SLR for, well, taking pictures of the kids…

With all this digital media, one thing I’ve started doing is capturing “raw ISO files” of my media.  So if I record an event on the Vixia, I will take the SDHC card out, and immediately create an ISO image of the whole card.  That way, should anything happen, I can always go back directly to the source.

I tested a lot of tools for dealing with ISO files on Windows.  FREE ISO Creator by minidvdsoft worked very well, until it started nagging me about buying it.  LCISOCreator also had some positives.  But in the end, I found that ISO Recorder V 3.1 worked the best for my workflow.

When I insert an SDHC card, I literally right click on the drive letter (H: in my case), and select “Create ISO Image File”.  I give it an output ISO path, and I’m done.  Couldn’t be any easier.


As a side note, I use SlySoft’s free Virtual Clone Drive to mount my ISO files, import my AVCHD files, and create my videos.

Both of these free pieces of software support Windows 7 x64. They should be a part of any geek’s toolkit.

Dec 19

Find recent files via python

Here is a quick little python blurb to find any files in a given directory created within a certain time window.  I use this in my nightly scripts to find my latest database backups, exports, shapefiles, etc.   Hope it helps.

 

import os, datetime, stat

NOW = datetime.datetime.now()
TWELVE = datetime.timedelta(hours=12)
FORMAT = "%a %b %d %Y %H:%M:%S"
DIR = "D:/My/Full/Path/"

for x in os.listdir(DIR):
    # ST_CTIME is the file creation time on Windows
    int_time = os.stat(os.path.join(DIR, x))[stat.ST_CTIME]
    if datetime.datetime.fromtimestamp(int_time) > (NOW - TWELVE):
        # Do something with x here (ie, copy it somewhere)
        print x + " --> " + datetime.datetime.fromtimestamp(int_time).strftime(FORMAT)

Mar 26

ESRI Developer Summit 2009

I am finishing up my trip to Palm Springs for the ESRI Developer Summit.  Fantastic week, learned a LOT, met several new technical contacts and had an enjoyable time.  These are my Dev Summit highlights.

  • The biggest news I got out of the summit was that the GDB schema in ArcSDE 9.4 is going from 35+ tables down to 4.  They are going to essentially use an Open XML schema to describe all the GDB objects (object classes, feature classes, feature datasets, relationships, domains, etc.).  This is going to break a lot of existing sites that read those GDB tables (yes, even though ESRI tells us not to, we still do). 
  • I met with the GDB team Wednesday morning, and they confirmed that they are going to use XML field types, rather than BLOB.  This alleviates my biggest fear.  Next fear becomes scalability.  ESRI already requires a lot of exclusive locks for doing most anything in ArcCatalog.  Is only having 4 tables going to place additional locks on those tables and require MORE exclusive locks?  Let’s hope not.
  • ArcGIS Explorer 900 is going to be fantastic.  If you haven’t seen a demo of it yet, you should.  I can’t help but wonder how many ArcView licenses this will steal away.  It’s a free product with a LOT of functionality.  Beta release coming soon.
  • Lots of emphasis on python this year.  At 9.4, ESRI is replacing the “command line” in ArcGIS with a full python interpreter.  Nifty.
  • Had lunch on Tuesday with a group of GDB people; the main topic was managing the GDB, specifically how to deal with data modeling and multiple geodatabases.  ESRI has assigned a resource to deal with this problem; if you want his contact info so you can have some input, leave a comment for me with your email, and I will forward you his email.
  • One issue I’ve dealt with in the past is that if you use an Oracle Export against ST_GEOMETRY, the resulting IMPORT has issues with indexes.  ESRI KB Article 34342 deals with this.
  • Spatially clustering data storage of ST_GEOMETRY tables can provide significant performance improvements.  ESRI KB Article 32423 gives a procedure for clustering your data.

These are my initial, still-in-Palm-Springs highlights.  Will clean them up a bit when I get back to Charlotte.

Oct 22

SQL Server 2008 Spatial Indexes and ArcSDE 9.3

First, my apologies for the long delay between posts. Most of my spare time has been in researching which of our dead-beat candidates for President I will vote for on Nov 4. But I won’t get into that right now… I have recently been working on a new project with SQL Server 2008 using their new native Spatial Data type, and ArcSDE 9.3. Everything seems to work well with this datatype, and it is very easy to load data with ArcCatalog. A simple change to the default DBTUNE parameter for GEOMETRY_STORAGE and you’re ready to load data.

update sde.sde_dbtune set config_string = 'GEOMETRY' where keyword = 'DEFAULTS' and parameter_name = 'GEOMETRY_STORAGE'

Once I loaded in a lot of data, however, I noticed the spatial query performance was very, very slow. For example, doing an identify on a point layer with 150,000 records took ~6 seconds. Unacceptable. So, I put on my Sherlock Holmes hat, and went to work finding out why it was so inefficient.

First search found this article: http://webhelp.esri.com/arcgisdesktop/9.3/index.cfm?TopicName=Using_the_Microsoft_spatial_types

Another good article that I found: http://msdn.microsoft.com/en-us/library/bb964712.aspx

These gave me some good background on the spatial type, and specifically their spatial indexes. So I experimented with different level depths (high, medium, and low). I tried different CELLS_PER_OBJECT values. The latter provided a minor improvement, but performance was still unacceptable.

Re-reading the ESRI docs from the link above, I found this quote:

When creating a layer with a geometry or geography spatial column through ArcGIS, the bounding box of the feature class is calculated as the extent of the data that is to be indexed. Any features falling outside of this range will not be indexed but will still be returned in spatial queries. If the layer extent is not set, the maximum range of coordinates for the layer’s spatial reference system will be used for the bounding box. Whenever the layer is switched from load only I/O mode to normal I/O mode, the bounding box is adjusted with the latest layer extent.

So I tried resetting the layer envelope for the spatial index. This was the hidden gem I was searching for. By default, the spatial index was built with massive extents (the whole min/max from the layer’s spatial reference). I used “sdelayer -o describe_long” on my layer, and got the true extents.

sdelayer -o describe_long -l easement,shape -u port -i sde:sqlserver:localhost -D gisdb

Gave the following output:

Layer Envelope …….:
minx: 7570912.98584, miny: 688242.68201
maxx: 7719037.56738, maxy: 720956.74481

Then I modified the spatial index using SQL Server Management Studio and my new extents. To do this manually, you go to the table in SSMS. Go to the list of Indexes for the table, and find the spatial index. Right click, and select Properties.

Under “Select a Page”, go to the Spatial page. You will see the section for the Bounding Box. Simply copy/paste your values from the “sdelayer -o describe_long” output into these values, and hit OK. The progress indicator will show that it is rebuilding your spatial index, and when it’s finished, your layer should perform dramatically better.
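If you would rather script the change than click through SSMS, the index can also be recreated in T-SQL with an explicit bounding box. A rough sketch (the schema and index name here are made up; the table, column, and extents come from the sdelayer output above):

CREATE SPATIAL INDEX [S1_idx] ON [dbo].[EASEMENT] ([SHAPE])
USING GEOMETRY_GRID
WITH (
    BOUNDING_BOX = (7570912.98584, 688242.68201, 7719037.56738, 720956.74481),
    DROP_EXISTING = ON
);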

Here’s my index properties before:

And after:

Lastly, I decided doing 200+ layers this way was also unacceptable. So I built a T-SQL script that automatically regenerates all the spatial indexes using the bounding box from the SDE_LAYERS table. You can download my script from ESRI’s ArcScripts website:

http://arcscripts.esri.com/details.asp?dbid=15869

The ESRI documentation seems to imply that altering the layer from normal_io to load_only_io and back again will also potentially fix this problem. So I guess another fix would be to write a script that changes every layer to load-only and back again, but I chose this method. I got to learn a lot more about the spatial indexes this way.

EDIT: I have since tried the method of using “sdelayer -o load_only_io” followed by immediately putting it back in normal_io mode (“sdelayer -o normal_io”). This indeed rebuilds the spatial indexes with a better extent as well. So there are a variety of workarounds to the out-of-the-box spatial index issues with SDE/SQL 2008.

Hope this was helpful.

Jul 09

Open Source GIS Stack

In my last post, I talked about how easy the PostgreSQL/PostGIS installation was. The ArcSDE 9.3 media even includes the correct version of PostgreSQL to install for use with SDE. One note: if you want to enable PostGIS support, SDE 9.3 only supports PostGIS version 1.3.2, NOT the latest 1.3.3. Make sure you only install the 1.3.2 version if you want to use PostGIS for data storage.

Once I loaded PostgreSQL and enabled the PostGIS support, I simply changed the default geometry storage type in my DBTUNE table to “PG_GEOMETRY”. Then I copied/pasted my sample dataset into PostGIS using ArcCatalog. My sample dataset is a “small” electric utility dataset, with approximately 120,000 primary_oh lines and 60,000 transformers. The copy/paste went off without a hitch, and I have a very usable dataset in PostGIS. Now, let’s see what we can do at the psql prompt with our coordinates:


sde=# select st_astext(shape) from primary_oh
sde-# where objectid = 856253;


st_astext
-------------------------------------------------
 LINESTRING(1012108.7600746 1045855.35055982,1012173.61937546 1045822.20342262,1012230.98826477 1045792.88789014)
(1 row)

Cool, I can get the coordinates. Now, for even more fun. Let’s find if there are any Transformers attached to that piece of Primary_OH:


sde=# select objectid, st_astext(shape) from transformer
sde-#
where st_intersects(transformer.shape, (select
sde(# shape from primary_oh where objectid = 856253));

objectid | st_astext
---------+------------------------------------------
  529532 | POINT(1012108.7600746 1045855.35055982)
  479413 | POINT(1012230.98826477 1045792.88789014)
(2 rows)


Checking my map, that is indeed the correct answer. Yay! It works as it should.

Now that I have my data loaded into PostgreSQL/PostGIS, it was time to find out if other (non-ESRI) software could interact with the data. First test: Quantum GIS (http://qgis.org). Quantum GIS is an open source cross-platform GIS that runs on Linux, Windows, and OS X. I was able to load my PostGIS data straight in without any problems, and was very impressed with how fast it draws my data.

Great, we have a cross platform desktop application that will read/write my PostGIS data. But how can I get this data onto the web? Let’s try Geoserver (http://geoserver.org). After downloading and installing Geoserver, the steps to get Geoserver working with my data were very straightforward:

  • Open the Geoserver administration webpage (http://localhost:8080/geoserver).
  • Go to Config, then Data.
  • Create a Namespace for the data.
  • Back on the Config->Data menu, add a DataStore for the PostGIS data.
  • Back on the Config->Data menu, go to FeatureTypes and enable all the features you want to add to your published map.
  • Lastly, I wanted to use WMS to publish my map, so go to Config->WMS and set up a group layer for the new data (a sample request is sketched below).
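Once the group layer is set up, a raw WMS GetMap request in a browser is a quick sanity check. Something along these lines (the layer name, SRS, and bounding box below are placeholders; substitute your own group layer name and the extents of your data):

http://localhost:8080/geoserver/wms?service=WMS&version=1.1.1&request=GetMap&layers=electric_group&styles=&srs=EPSG:4326&bbox=-81,35,-80,36&width=600&height=600&format=image/png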

I made a quick modification to the OL_DEMO.HTML sample file, and looky-looky. I can now draw my data over top of Google Earth, MS Virtual Earth or Yahoo Satellite data. Most impressive how easy it was to publish data to the web, considering I am a database guy.


ESRI, I hope you’re taking notice of this stack. Because I’ll bet some of your customers will be.

Jun 28

ArcSDE 9.3

Well, I have started doing some testing on ArcGIS 9.3.

First up: let’s try PostgreSQL, since I’ve never touched it before. Should be interesting. So I visit http://www.postgresql.org, and start poking around on how I install this thing. What version do I need? No clue. I see one entry that says there is a third party distribution called PostgresPlus, which has the most common options, and is easy to install. So in my best Jim Carrey voice, “Alllllrighty then, PostgresPlus it is.”

So I get it installed, create a database, apply the PostGIS extensions to it, create my SDE user and database, and I’m all ready for testing. I then go to load up SDE on it, and lo and behold, the SDE 9.3 installation INCLUDES the PostgreSQL software itself. Nicely played, ESRI. Nicely played.

Now I uninstall the PostgresPlus, and start over.

Everything said and done, the whole installation was very straightforward. So I copied a small sample dataset in, and voila! I can query the shape records from the command line, at the psql prompt. I chose to use the ST_GEOMETRY storage rather than the POSTGIS storage. I think ST_GEOMETRY is going to take off now, so I better get a little more acquainted with it.

I was planning on posting screen shots of the install, but it was so easy that it is not warranted. I will just sing the praises of how easy it all was. If I try this install on my Linux box, I will definitely post more technical details on it.

I wonder how many people will actually move to this option now. The cost is definitely better than the Oracle or SQL Server options. Still wondering how the OpenSource community will feel about ESRI charging $10K for a license to read/write spatial data into a free database. Hmmmm.

Next up: Oracle 11g and ArcSDE 9.3.

May 08

Oracle 10.2.0.4.0 and SDE

With the release of Oracle 10.2.0.4.0, I have begun full-scale testing.

I know ESRI recommends staying away from this patch, at least if you are using ST_GEOMETRY and/or Oracle Spatial data (http://forums.esri.com/Thread.asp?c=158&f=2291&t=250200). Most of my sites are not using ST_GEOMETRY, so I want to test this release.

This patch fixes a couple of Oracle bugs in 10.2.0.3.0 that I have run up against, so hopefully things will stabilize. Most of my sites that have upgraded to Oracle 10g have seen very significant performance increases. One site in particular has seen mostly performance increases, but their reconcile times have increased by a couple hundred percent. Not good; hopefully that is fixed with these patches. Will post more testing results as I get them.

Two “hidden” Oracle parameters we’ve started testing with 10G and SDE are:

“_optimizer_cartesian_enabled” — This defaults to TRUE. When set to FALSE, it won’t allow the optimizer to use an expensive cartesian join. For SDE, this so far appears to be a good setting. We’ll keep an eye on it.

“_optimizer_mjc_enabled” — Similar to the parameter above, this also defaults to TRUE, and when set to FALSE this one prohibits the optimizer from using merge join cartesians.
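For reference, setting these looks something like the following from SQL*Plus (a sketch; hidden parameters have to be double-quoted, and scope=both assumes the instance is running off an spfile):

alter system set "_optimizer_cartesian_enabled" = false scope=both;

alter system set "_optimizer_mjc_enabled" = false scope=both;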

Here’s a specific example of an MJC that we want to eliminate:


      Rows  Row Source Operation                                              Object Id
         0  VIEW (cr=44,254,208 pr=38 pw=0 time=249.8493s)
         0  HASH UNIQUE (cr=44,254,208 pr=38 pw=0 time=249.8493s)
         0  FILTER (cr=44,254,208 pr=38 pw=0 time=249.8489s)
         0  NESTED LOOPS (cr=44,254,208 pr=38 pw=0 time=249.8489s)
         0  NESTED LOOPS (cr=44,254,208 pr=38 pw=0 time=249.8489s)
21,363,880  MERGE JOIN CARTESIAN (cr=68 pr=4 pw=0 time=21.4044s)
       628  INDEX RANGE SCAN LINEAGES_PK (cr=7 pr=0 pw=0 time=0.0031s)             26079
21,363,880  BUFFER SORT (cr=61 pr=4 pw=0 time=0.0422s)
    35,434  INDEX RANGE SCAN D170_PK (cr=61 pr=4 pw=0 time=0.0000s)               833073
         0  INDEX UNIQUE SCAN A170_PK (cr=44,254,140 pr=34 pw=0 time=225.8974s)  1215760
       117  NESTED LOOPS (cr=1,522,863 pr=0 pw=0 time=7.7255s)
   758,931  INDEX RANGE SCAN LINEAGES_PK (cr=5,001 pr=0 pw=0 time=0.0247s)         26079
       117  INDEX UNIQUE SCAN D170_PK (cr=1,517,862 pr=0 pw=0 time=7.0464s)       833073
         0  FILTER (cr=3,516 pr=34 pw=0 time=0.0465s)
         0  FILTER (cr=3,516 pr=34 pw=0 time=0.0448s)
         0  NESTED LOOPS OUTER (cr=3,516 pr=34 pw=0 time=0.0436s)
         0  NESTED LOOPS (cr=3,516 pr=34 pw=0 time=0.0416s)
         0  TABLE ACCESS BY INDEX ROWID D170 (cr=3,516 pr=34 pw=0 time=0.0399s)   833071
     1,241  INDEX RANGE SCAN D170_IDX1 (cr=2,276 pr=34 pw=0 time=0.0279s)        1215526
         0  INDEX UNIQUE SCAN LINEAGES_PK (cr=0 pr=0 pw=0 time=0.0000s)            26079
         0  VIEW (cr=0 pr=0 pw=0 time=0.0000s)
         0  FILTER (cr=0 pr=0 pw=0 time=0.0000s)
         0  NESTED LOOPS (cr=0 pr=0 pw=0 time=0.0000s)
         0  INDEX RANGE SCAN A170_PK (cr=0 pr=0 pw=0 time=0.0000s)               1215760
         0  INDEX UNIQUE SCAN LINEAGES_PK (cr=0 pr=0 pw=0 time=0.0000s)            26079
         0  INDEX UNIQUE SCAN LINEAGES_PK (cr=0 pr=0 pw=0 time=0.0000s)            26079

YUK!

And so far, we haven’t seen any ugly execution plans like this since we changed the two settings. Fingers crossed.
