I'm doing some research into an iPhone application idea I have. The idea has two parts: the app itself and an online back end. The back end will be logging to some form of database. The data structure will be straightforward, but the iPhone app could update and log to the back end multiple times per minute. Now, let's speculate: the app becomes successful and I have 1,000,000 iPhones updating twice per minute. If each log sends 512 bytes, that's 1,024,000,000 bytes per minute (512 × 2 × 1,000,000), or roughly 1 GB/min, which is 1,474,560,000,000 bytes/day, or about 1.5 TB/day. So a lot of data being produced.
So my question is: how well does Core Data cope with large quantities of data?
I have read some of the developer docs on it, and from what I can gather the data is stored in memory until you tell it to save to disk:
Saving Changes
All changes to the objects managed by Core Data happen in memory and are transient until they are committed to disk. To commit changes to the data model to disk, simply send a save: message to the managed object context. This behavior preserves the traditional document semantics that users expect in document-based applications.
What is the intended use of this save function? Every time the application closes? Every time you have x amount of objects in memory and you feel it's time to free some up?
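For context, my current mental model is batching the saves rather than hitting the disk on every log call — something roughly like the sketch below. The entity name LogEntry, the unsavedCount property, and the batch size of 100 are placeholders of my own, not anything from the docs:

```objc
#import <CoreData/CoreData.h>

// Assumes self.managedObjectContext and an integer property
// self.unsavedCount already exist on this class (my assumption).
- (void)logEntry:(NSDictionary *)payload
{
    // Create a new managed object for this log record; it lives
    // only in memory until the context is saved.
    NSManagedObject *entry =
        [NSEntityDescription insertNewObjectForEntityForName:@"LogEntry"
                                      inManagedObjectContext:self.managedObjectContext];
    [entry setValuesForKeysWithDictionary:payload];

    // Save in batches so most log calls stay in memory and only
    // every 100th one commits the pending changes to disk.
    if (++self.unsavedCount >= 100) {
        NSError *error = nil;
        if (![self.managedObjectContext save:&error]) {
            NSLog(@"Core Data save failed: %@", error);
        }
        self.unsavedCount = 0;
    }
}
```

Is that broadly the right pattern, or is Core Data designed to be saved at some other cadence (e.g. only when the app resigns active)?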