I think AIMS 2013 does have some advantages compared with the old MapGuide 6.5. But map rendering speed is our concern, especially since we have heavy data.
In our situation, each map includes a minimum of 120 layers, and we have 1800 maps in total (I use 8 links to point to these layout/map resources dynamically for internal and external users). Even if caching one map took only 8 hours, it would still take about two years to cache all the data, so I prefer no cache. Without it, zooming in/out usually takes half a minute. An additional factor is that we have to update some of the spatial data once a month, so caching is not our first option. Any suggestions for this situation? Thank you.
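The "two years" figure above can be sanity-checked with a quick back-of-envelope calculation (the 8-hours-per-map estimate is the poster's own, and this assumes maps are cached strictly one at a time with no parallelism):

```python
# Rough sequential pre-caching estimate (assumed figures from the post above).
maps = 1800
hours_per_map = 8

total_hours = maps * hours_per_map   # total wall-clock hours if fully serial
total_days = total_hours / 24
total_years = total_days / 365

print(total_hours, total_days, round(total_years, 2))
```

So a fully serial run works out to roughly 1.6 years, which matches the "about two years" estimate once you allow for maps that take longer than 8 hours or interruptions for the monthly data updates.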
Here's a general checklist:
The new profiler in AIMS 2013 should tell you which Layers/Feature Sources need to be investigated
Thanks for all the suggestions and comments above.
Jackie, most of our data is shapefiles loaded into PostgreSQL/PostGIS (plus a little raster data). Because we have already decided to use AIMS, we have to improve the rendering speed. So far, we can do the following things to improve it a little:
Any other suggestions?
Notice how Google Maps, Bing and OpenStreetMap load square by square?
That's because they're tiled maps, pre-baked and ready to be served through a series of load-distributed servers. So when someone asks for "Google Maps speed", they are really asking for tiled maps (which AIMS can do), whether they are aware of it or not.
The perceived performance of MG 6.5 comes from all the hard work being off-loaded to that filthy, IE-only, black-box ActiveX viewer control. It's 2013; such black-box, Microsoft-only technology does not belong here.
AIMS is not MG 6.5. Don't look at AIMS with a MG 6.5 mindset.
Okay, since you're housing most of your spatial data within an RDBMS, the full suite of DBMS optimizations is available to you, optimizations that AIMS doesn't and won't need to be concerned about.
This is venturing into DBA territory, but a general list would be:
* Turn on trace logging in the PostgreSQL FDO provider (link) and see what gnarly SQL the FDO provider is sending off to Postgres.
* Use EXPLAIN / query execution plans provided by Postgres/pgAdmin to get a better idea of how such SQL queries can be improved through proper indexing.
* Index specific columns to improve queries that have filters.
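As a sketch of the EXPLAIN/indexing steps above (the table and column names here are hypothetical, not taken from the thread; substitute your own):

```sql
-- Inspect the plan for a typical filtered map query.
EXPLAIN ANALYZE
SELECT gid, geom
FROM parcels
WHERE zoning = 'R1'
  AND geom && ST_MakeEnvelope(-122.5, 45.4, -122.4, 45.5, 4326);

-- A GiST index makes the bounding-box (&&) filter cheap:
CREATE INDEX parcels_geom_gist ON parcels USING GIST (geom);

-- A plain b-tree index covers the attribute filter:
CREATE INDEX parcels_zoning_idx ON parcels (zoning);

-- Refresh planner statistics after creating indexes:
ANALYZE parcels;
```

If EXPLAIN still shows a sequential scan on a big table after indexing, that's usually the query to dig into first.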
If this indexing stuff is a bit over your head, there's a site that explains it better than I can:
Using a Base Layer Group and adding multiple connections to the RDBMS, one for each heavy data group, can also improve the rendering speed dramatically. Thanks for the solution, Jackie.
We've been working with AIMS 2013 for about a year and have been live for just over two months, so I figured it's time to share some things we've learned for others' benefit. I work with swimming123, so I should elaborate on his point above about Postgres.
I assume it is the nature of the FDO provider, but each data connection you make to a PostgreSQL database acts as one connection to the database. So if you have built only one data connection, everything is piped through that single connection, which acts as a significant bottleneck. By giving your heavier data its own data connection (even though it's just a duplicate resource), you allow it to be queried simultaneously with your other resources. Now, there are limits to the number of connections that make sense for a Postgres database (we are moving the DB to a second server and setting up connection pooling over the next few weeks), but you are doing yourself a tremendous disservice by using only one.
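Since connection pooling comes up above, here is a minimal sketch of what that might look like with PgBouncer; the database name, host, and pool sizes are hypothetical and would need tuning for your own load:

```ini
; pgbouncer.ini (hypothetical values, not from the thread)
[databases]
gisdb = host=127.0.0.1 port=5432 dbname=gisdb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 200
default_pool_size = 20
```

The idea is that many client connections (here up to 200) share a much smaller pool of real Postgres connections, so adding duplicate FDO data connections on the AIMS side stops translating into an ever-growing connection count on the database server.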