
GorillaPaws

macrumors 6502a
Original poster
Oct 26, 2003
932
8
Richmond, VA
I was wondering if there is a good resource such as a checklist for anticipating the various failure cases of common Cocoa operations and tasks.

I'm just starting to learn a bit about Unit Testing, and it seems like the most critical dimension in determining how useful the tests will be is the programmer's ability to anticipate the ways in which their code could possibly fail. I realize this is a skill that is learned over time through lots of experience and becomes automatic, but are there resources out there to help develop this skill?

Some example cases that quickly come to mind are:
  1. Handling nil and NULL cases
  2. Fencepost errors
  3. Unavailable resources (memory, disk space, missing files, etc.)
  4. Handling odd Unicode characters (e.g. Kanji)
  5. Very large and small numbers as well as 0
  6. Errors resulting from someone subclassing your class
 

gnasher729

Suspended
Nov 25, 2005
17,980
5,565
I was wondering if there is a good resource such as a checklist for anticipating the various failure cases of common Cocoa operations and tasks.

I'm just starting to learn a bit about Unit Testing, and it seems like the most critical dimension in determining how useful the tests will be is the programmer's ability to anticipate the ways in which their code could possibly fail. I realize this is a skill that is learned over time through lots of experience and becomes automatic, but are there resources out there to help develop this skill?

Some example cases that quickly come to mind are:
  1. Handling nil and NULL cases
  2. Fencepost errors
  3. Unavailable resources (memory, disk space, missing files, etc.)
  4. Handling odd Unicode characters (e.g. Kanji)
  5. Very large and small numbers as well as 0
  6. Errors resulting from someone subclassing your class

1. There are no "odd" Unicode characters.
2. Errors introduced by someone subclassing your class are not your problem.
3. Any message sent to nil returns 0 / NO / nil / 0.0. You should design your methods so that this is the correct reply; that saves a lot of error handling in your code.
4. On a 64-bit system, you don't run out of memory. Your machine will slow to a crawl and your user will give up in disgust _long_ before you run out of memory.
5. If files in your application package are missing, that's not your problem.

6. I examine _all_ input data and silently replace anything invalid with something valid. I also replace every unreasonably large number with something that is very large but still small enough to avoid problems. That works very well with 64-bit integers, because you usually don't need anything near the full range. I avoid unsigned integers, because simple operations like x - 1 give unexpected results. 64-bit + NSInteger is a good combination (a sketch of this and point 3 follows the list).

7. I'm aggressive about detecting bugs. If passing NULL is an error, then I don't check for it - it will be detected when the code is run, and then the bug is fixed. By definition, Objective-C exceptions are _always_ programming errors, so there is always a breakpoint set for them. You don't write code to work around bugs; you write code that is bug-free.
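
A rough sketch of what I mean in points 3 and 6 (the function name and limit below are made up, just for illustration):

Code:
#import <Foundation/Foundation.h>

// Point 6: silently repair invalid input and clamp unreasonably large
// numbers.  kMaxReasonableCount is an invented limit for illustration.
static const NSInteger kMaxReasonableCount = 1000000;

static NSInteger ClampCount(NSInteger requested)
{
    if (requested < 0)
        return 0;                       // invalid -> smallest valid value
    if (requested > kMaxReasonableCount)
        return kMaxReasonableCount;     // huge -> large but safe
    return requested;
}

int main(void)
{
    // Point 3: any message sent to nil answers 0 / NO / nil / 0.0, so
    // "empty" falls out as the natural reply with no error handling at all.
    NSString *name = nil;
    NSLog(@"length of nil name: %lu", (unsigned long)[name length]);   // 0, no crash

    NSLog(@"%ld", (long)ClampCount(-5));        // 0
    NSLog(@"%ld", (long)ClampCount(5000000));   // 1000000
    return 0;
}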
 

chown33

Moderator
Staff member
Aug 9, 2009
10,751
8,425
A sea of green
I think it's useful to have an overarching strategy, from which more specific tactics can be derived as needed. I've found these to be useful and concise expressions of strategy:
1. Confirm assumptions.
2. Describe expectations.

"Confirm assumptions" leads to tactics like bounds-checking, use of assertions, use of exceptions, and other sanity checks. Assertions, in particular, are valuable because they are conditionally compiled, so you can turn them on and off as needed. I usually leave them on until the last possible moment.

"Describe expectations" leads to informative error messages (e.g. showing the path of the file that failed, rather than just saying "Can't open file"), and the placement of exceptions closer to the point of actual failure, rather than at some distant place where a previous unchecked failure finally catches up to you.

The idea then is you look at a chunk of code and find the places to confirm assumptions, and put assertions there. You write the assertion messages to describe expectations as well as the nature of the failed assertion. You continue in this vein throughout the code. Someone reading the code should be able to easily tell what the assertions mean and what the expected conditions are.
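
For example, a bounds check written in this style might look like the sketch below (the class and ivar are hypothetical):

Code:
@interface ItemList : NSObject
{
    NSArray *items;   // hypothetical ivar, for illustration
}
- (id)itemAtIndex:(NSUInteger)index;
@end

@implementation ItemList

- (id)itemAtIndex:(NSUInteger)index
{
    // Confirm the assumption (index is in bounds) and describe the
    // expectation (what arrived vs. what was required).  NSAssert is
    // conditionally compiled: defining NS_BLOCK_ASSERTIONS removes it.
    NSAssert2(index < [items count],
              @"index %lu is out of bounds; expected a value less than %lu",
              (unsigned long)index, (unsigned long)[items count]);
    return [items objectAtIndex:index];
}

@end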


The other thing you have to concern yourself with is test coverage: do the tests cover all the executable code?
 

MorphingDragon

macrumors 603
Mar 27, 2009
5,160
6
The World Inbetween
I was wondering if there is a good resource such as a checklist for anticipating the various failure cases of common Cocoa operations and tasks.

I'm just starting to learn a bit about Unit Testing, and it seems like the most critical dimension in determining how useful the tests will be is the programmer's ability to anticipate the ways in which their code could possibly fail. I realize this is a skill that is learned over time through lots of experience and becomes automatic, but are there resources out there to help develop this skill?

Some example cases that quickly come to mind are:
  1. Handling nil and NULL cases
  2. Fencepost errors
  3. Unavailable resources (memory, disk space, missing files, etc.)
  4. Handling odd Unicode characters (e.g. Kanji)
  5. Very large and small numbers as well as 0
  6. Errors resulting from someone subclassing your class

I know you're probably designing unit tests for existing programs, but during my internship we wrote unit tests before implementing any code whatsoever. What I suggest is making a small program that is completely defined and documented with unit tests before you write the implementation. You might not do this in real life, but it will help you understand what you need to test for. It helped me. :)
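
For instance, with OCUnit (SenTestingKit) you can write something like this before the class under test even exists (the Fraction class here is invented):

Code:
#import <SenTestingKit/SenTestingKit.h>
#import "Fraction.h"   // doesn't exist yet -- the tests pin down what it must do

@interface FractionTests : SenTestCase
@end

@implementation FractionTests

- (void)testInitReducesToLowestTerms
{
    Fraction *f = [[[Fraction alloc] initWithNumerator:2 denominator:4] autorelease];
    STAssertEquals([f numerator], (NSInteger)1, @"2/4 should reduce to 1/2");
    STAssertEquals([f denominator], (NSInteger)2, @"2/4 should reduce to 1/2");
}

- (void)testZeroDenominatorThrows
{
    STAssertThrows([[Fraction alloc] initWithNumerator:1 denominator:0],
                   @"a zero denominator is a programming error");
}

@end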
 

chown33

Moderator
Staff member
Aug 9, 2009
10,751
8,425
A sea of green
I know you're probably designing unit tests for existing programs, but during my internship we wrote unit tests before implementing any code whatsoever. What I suggest is making a small program that is completely defined and documented with unit tests before you write the implementation.

Serious question: How do you test your tests?

In other words, how do you do the following:
1. Ensure the tests do not contain bugs.
2. Ensure the tests actually test what they claim to test.

And if this involves mock objects, then how do you test the mock objects? How do you maintain them effectively?
http://en.wikipedia.org/wiki/Mock_object#Limitations
"Over-use of mock objects as part of a suite of unit tests can result in a dramatic increase in the amount of maintenance that needs to be performed on the tests themselves during system evolution as refactoring takes place."
...
"Mock objects have to accurately model the behavior of the object they are mocking, which can be difficult to achieve if the object being mocked comes from another developer or project or if it has not even been written yet." (underline added)


I'm asking because I've seen TDD bring progress on a project to its knees, even though the lead practitioner claimed considerable experience with the process. It seemed to me there was some judgement impairment somewhere, but I was unable to identify the specific cause. It was like the aircraft crash cause "controlled flight into terrain", only in slow motion.
http://en.wikipedia.org/wiki/Controlled_flight_into_terrain
 

GorillaPaws

macrumors 6502a
Original poster
Oct 26, 2003
932
8
Richmond, VA
@gnasher729, @chown33, and @MorphingDragon, Thanks for taking the time to respond. I had a few follow-up questions.

5. If files in your application package are missing, not your problem.

I was referring to expected user data (e.g. a missing user defaults .plist, or a resource on an unavailable remote connection).

The points you've made were helpful and are greatly appreciated. Hopefully others will benefit from your advice as well.

"Confirm assumptions" leads to tactics like bounds-checking, use of assertions, use of exceptions, and other sanity checks. Assertions, in particular, are valuable because they are conditionally compiled, so you can turn them on and off as needed. I usually leave them on until the last possible moment.

Do you use NSAssert in your production code if you're also using STAssert in your unit test methods? Is that redundant or just a technique for being very clear about the assumptions your code is making? Perhaps I misunderstood your meaning here?

I printed out "Confirm assumptions, Describe expectations" and tacked it on my wall.

The idea then is you look at a chunk of code and find the places to confirm assumptions, and put assertions there. You write the assertion messages to describe expectations as well as the nature of the failed assertion. You continue in this vein throughout the code. Someone reading the code should be able to easily tell what the assertions mean and what the expected conditions are.

When you write code, do you insert these checks as you go, do you write a method and then add the checks and assertions afterward (the way one might proofread an essay before submission), or do you have some other technique (such as test-driven development)?

The other thing you have to concern yourself with is test coverage: do the tests cover all the executable code?
Do you have advice about how much coverage is optimal based on your experiences? Model objects and worker objects seem like the most obvious and straightforward candidates. Controllers are trickier because you're going to run into the mock object problem that you raised.

I'm still trying to figure out if unit testing is worth the cost, and if so what are the best practices for incorporating it into the development workflow.
 

chown33

Moderator
Staff member
Aug 9, 2009
10,751
8,425
A sea of green
I was referring to expected user data (e.g. a missing user defaults .plist, or a resource on an unavailable remote connection).
A missing user defaults plist is something your code should always be prepared to handle sensibly, and the same goes for an unavailable remote resource. Malfunctioning under such common scenarios is unacceptable.

If Safari or Firefox became hopelessly lost or crashed over something as simple as missing defaults or an unreachable Google page, they would rightly be considered egregiously defective.
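
The usual Cocoa idiom makes the missing-plist case a non-event: register fallback values at launch, and every read succeeds whether or not the user's plist exists. A minimal sketch (the key and value are invented):

Code:
// Typically done early, e.g. in the app delegate.  Registered defaults
// are consulted whenever the user's own plist has no value for a key,
// so a missing plist simply falls through to these values.
NSDictionary *fallbacks = [NSDictionary dictionaryWithObject:[NSNumber numberWithInteger:10]
                                                      forKey:@"RecentItemsLimit"];
[[NSUserDefaults standardUserDefaults] registerDefaults:fallbacks];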


Do you use NSAssert in your production code if you're also using STAssert in your unit test methods? Is that redundant or just a technique for being very clear about the assumptions your code is making? Perhaps I misunderstood your meaning here?

Both. NSAssert is in the actual production code. It's in the internal implementation, where unit-tests can't reach. So there might be several assertions (NSAssert) in a method, which contribute to building up the eventual result that can be externally tested.

BTW, you can use NSAssert without unit-testing, and vice versa. Neither one is an all-or-nothing proposition.
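
A sketch of how the two relate (the class, ivars, and names are invented, and both methods are fragments of larger classes):

Code:
// Production code: an internal assertion the unit tests can't reach.
- (NSString *)normalizedName
{
    NSString *trimmed = [rawName stringByTrimmingCharactersInSet:
                            [NSCharacterSet whitespaceCharacterSet]];
    NSAssert(trimmed != nil, @"trimming should never produce nil");
    return [trimmed lowercaseString];
}

// Unit test: sees only the externally visible result that the internal
// assertions helped build up.
- (void)testNormalizedName
{
    [record setRawName:@"  Fred "];
    STAssertEqualObjects([record normalizedName], @"fred",
                         @"expected a trimmed, lowercased name");
}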

When you write code, do you insert these checks as you go, do you write a method and then add the checks and assertions afterward (the way one might proofread an essay before submission), or do you have some other technique (such as test-driven development)?
It depends. At different times, all the above.

If I'm experimenting to find out a good way to design something, I might have only a few assertions, which are more like milestone markers that I can use to make sure things are working as expected at certain points during the experiment. During the experiment I'll make tests or "exercisers" that help gather data for making a decision from the experimental code.

After I've decided on a path to take with the design or implementation, assertions confirm that what is expected is what happens. During testing, I may see a reason to go back and add internal assertions, or move them around, or transition them to test-resident assertions (and not necessarily STAssert).

Do you have advice about how much coverage is optimal based on your experiences? Model objects and worker objects seem like the most obvious and straightforward candidates. Controllers are trickier because you're going to run into the mock object problem that you raised.
100% coverage is best. It never happens in real life without a big investment in mockups and other testing scenarios, including stress-testing, fault injection, fuzzing, etc. "Optimal" depends on the project: risks, costs, payoffs, time, etc.

The main thing with test coverage is to know how much you're getting. If you don't know what coverage you're getting, you have no way to tell if it's optimal (for whatever value of "optimal" is optimal for the project).
 

GorillaPaws

macrumors 6502a
Original poster
Oct 26, 2003
932
8
Richmond, VA
@chown33 Thanks for clarifying about your use of NSAssert with unit tests. I found your post very helpful, particularly when describing your process. I think this is an aspect of coding that is often ignored in books, blogs and documentation (test-driven development and pair programming being exceptions).
 

MorphingDragon

macrumors 603
Mar 27, 2009
5,160
6
The World Inbetween
Serious question: How do you test your tests?

In other words, how do you do the following:
1. Ensure the tests do not contain bugs.
2. Ensure the tests actually test what they claim to test.

And if this involves mock objects, then how do you test the mock objects? How do you maintain them effectively?
http://en.wikipedia.org/wiki/Mock_object#Limitations
"Over-use of mock objects as part of a suite of unit tests can result in a dramatic increase in the amount of maintenance that needs to be performed on the tests themselves during system evolution as refactoring takes place."
...
"Mock objects have to accurately model the behavior of the object they are mocking, which can be difficult to achieve if the object being mocked comes from another developer or project or if it has not even been written yet." (underline added)


I'm asking because I've seen TDD bring progress on a project to its knees, even though the lead practitioner claimed considerable experience with the process. It seemed to me there was some judgement impairment somewhere, but I was unable to identify the specific cause. It was like the aircraft crash cause "controlled flight into terrain", only in slow motion.
http://en.wikipedia.org/wiki/Controlled_flight_into_terrain

At my internship, we used a mixture of mathematical proofs, asserts, and mock objects while designing the tests. I didn't notice a slowdown in development speed, but I haven't worked on many big projects.

The reasoning I've been told for test design is that tests debug the code and code debugs the tests.

I've always been a bit suspicious of this reasoning, but tests make me think about the functionality, documentation and use cases before implementation. They can also tell me when something major breaks. I write tests for these reasons, but I don't necessarily follow TDD philosophies.

My university seems to be a big advocate of TDD, Agile and XP methods, so I get to learn how to use them whether or not it's the best solution.
 