You have run into the limit of what object-oriented persistence (OOP) can handle effectively.
I'm not alone in thinking so. Rocky Lhotka makes the same point (I'm looking for the references but they've escaped me for the nonce). There is a good discussion of when to use OOP and when not to use it in Martin Fowler's Patterns of Enterprise Application Architecture.
We (IdeaBlade and Rocky) love OOP when we're making active use of rich behavior associated with our data. At some point, however, we are asking OOP to do what it was never intended to do - efficient processing of large volumes of data. We're talking about an extremely low ratio of logic to data.
Fortunately, DevForce has the hooks to let you step outside of the OOP paradigm when you have to do so.
You mentioned that you already have a (WCF) service that works well. You might want to use this as is. We can talk about how later. In the next section, I describe another approach.
We have a customer who had to insert 350,000 records in a single shot. First try with the normal DevForce save took 20 - 25 minutes. Next try with raw ADO.NET took about the same ("a little faster, not much"). Thus there is no faster standard way to insert a ton of records; it's not a DF issue. The standard .NET way to insert records is one record at a time, and SQL Server can only eat the records at a certain pace.
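For context, the one-record-at-a-time approach that both the DevForce save and raw ADO.NET reduce to looks roughly like the sketch below. The connection string, table, and column names are made up for illustration; the point is that every iteration is a separate round trip to SQL Server, which is what caps the pace.

```csharp
using System.Data;
using System.Data.SqlClient;

// Assumes: connectionString points at your database,
// and aTable is a DataTable holding the rows to insert.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    foreach (DataRow row in aTable.Rows)   // 350,000 rows = 350,000 round trips
    {
        using (var cmd = new SqlCommand(
            "INSERT INTO high_volume_table (col1, col2) VALUES (@p1, @p2)", conn))
        {
            cmd.Parameters.AddWithValue("@p1", row["col1"]);
            cmd.Parameters.AddWithValue("@p2", row["col2"]);
            cmd.ExecuteNonQuery();         // one record per network round trip
        }
    }
}
```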
Then he went with BulkCopy (SQL Server only) - and the insert took 10 seconds!
Please note that he was using a DevForce app and DevForce mechanisms to prepare for insert. He just side-stepped DF save at the last moment by isolating the in-memory table with the huge number of inserts and using BulkCopy to jam the records into the database.
The approach boils down to a few lines of code that look roughly like this:

DataTable aTable = ...; // a DataTable holding the records to insert
using (var bulkCopy = new System.Data.SqlClient.SqlBulkCopy(connectionString))
{
    bulkCopy.DestinationTableName = "high_volume_table";
    bulkCopy.WriteToServer(aTable);
}
Note that BulkCopy bypasses transactions, triggers, referential integrity checks, etc. That's the price you pay for speed, and it probably makes sense for data sources (e.g., calculation methods) that generate a ton of data.
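If you want some of those safeguards back, SqlBulkCopy lets you opt in selectively through SqlBulkCopyOptions. A minimal sketch, assuming the same made-up connection string, table name, and DataTable as before:

```csharp
using System.Data;
using System.Data.SqlClient;

// Opt back in to the checks BulkCopy skips by default.
var options = SqlBulkCopyOptions.CheckConstraints        // honor FK and check constraints
            | SqlBulkCopyOptions.FireTriggers            // run insert triggers after all
            | SqlBulkCopyOptions.UseInternalTransaction; // roll back a failed batch

using (var bulkCopy = new SqlBulkCopy(connectionString, options))
{
    bulkCopy.DestinationTableName = "high_volume_table";
    bulkCopy.BatchSize = 10000;     // commit in chunks rather than one giant batch
    bulkCopy.WriteToServer(aTable); // aTable: the DataTable of rows to insert
}
```

Each option you turn back on costs some of the speed you gained, so enable only what you actually need.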
I suppose you could always re-read and verify that the data made it, but that's probably over the top. No need for triggers. No worry about transactions (since all the data are new). You just have to be careful about referential integrity.
Yes it's a hack ... but it does the job and you don't have to stray far from DevForce to do it.
I should mention that you probably want to GENERATE this data close to the data tier. Performance will be terrible if you create millions of objects on a remote client and ship them over the wire. You want to create them on the host side if at all possible. The DevForce RPC mechanism helps you initiate and control that server-side process from a remote client.
When there is virtually no human interaction - no UI to worry about - and the objective is to crunch, crunch, crunch as fast as you can, OOP may not be an acceptable approach. This different scenario is more suited to traditional procedural methodology - what Fowler calls "Transaction Script".
I trust that your application does more than just crank out tons of new records. Surely you're using other kinds of less numerous business objects: objects that will hang around for a while, objects with complex logic, objects you display to users, objects that change at the pace of human/computer interaction. Here is where OOP comes into its own; here OOP saves time and reduces complexity.