Java or C?

  • Java: 22 votes (34.9%)
  • C: 41 votes (65.1%)

Total voters: 63

Referring back to the fact that the choice you make may be determined by supply (only Java or C classes offered) versus demand, I'd still say C, all things being equal. However, like someone else said earlier, if the instructor for Java is clearly superior, go for that. In a classroom setting, the instructor is everything.

If the hands-down best instructor is teaching COBOL, stick a fork in your hand and sign up for that (don't worry, this won't happen).
 
I'm a huge fan of both, and others. I started in C lo these many years ago and then picked up C++, Java, Objective C, etc. What worked for me was learning one language really well (in my case C) and this in turn made moving between other languages a piece of cake (well sort of :D).

C is great, as others have pointed out, because you're forced to get into the nitty-gritty of memory management, file and string manipulation, and other components that get hidden behind the higher-level frameworks available in, for instance, Java. You'll always come out on top if you have a grasp of the "magic" that goes on behind ARC, garbage collection, and the higher-level string packages and libraries available.
 
Easy — 30. 0-based math isn't hard.

Sure, but you had to think about it. And if you had accidentally typed i<=30 instead of i<30, you're now executing 31 times. On the other hand, a Ruby statement like "30.times" will execute, well, 30 times.
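
To make that concrete (a minimal sketch):

Code:
30.times { |i| puts "Iteration #{i + 1}" }  # i runs 0..29; the block executes exactly 30 times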

This is a trivial example, though. Let's say you have a function that collects sensor data and stuffs it into an array of indeterminate length. After collecting the data, you then want to print out the results, one per line.

In C, your code would look something like this:

Code:
	float data_array[ARRAY_SIZE];  /* The array that will hold our sensor data */
	int sensor_measurements;       /* Holds the return value of collect_sensor_data(), which indicates how many measurements were collected */
	int i;
	sensor_measurements = collect_sensor_data(data_array, ARRAY_SIZE);  /* collect_sensor_data() takes the maximum number of measurements so we don't overrun our buffer */
	for(i = 0; i < sensor_measurements; i++) {
		printf("Measurement %i: %f\n", i + 1, data_array[i]);
	}

The equivalent code in Ruby would be:

Code:
sensor_data().each_with_index{ |value, index| puts "Measurement #{index + 1}: #{value}" }

The Ruby code is not only simpler and more compact, it's far easier to conceptualize. Why should a beginner need to know what a pointer is to complete a task like this, let alone understand how pointers and arrays are related in C? Making an absolute beginner learn stuff like this right off the bat is like teaching someone to fly on a 747. As splitpea said, you're best off learning the basics like flow control with a high-level language where you don't have to worry about managing memory, buffer overruns, pointers, and so on. There's time to learn about those things later, when you're ready to find out what's actually going on under the hood.

However, I don't think there's a problem with learning procedural programming, then object-oriented programming, and THEN digging down into the nitty-gritty of the hardware. Many OOP languages so heavily abstract the underlying system that you don't really need to know anything about memory management and such in order to grasp the concepts. That said, I don't think you'll ever really understand object-orientation without a grounding in the fundamentals of memory management and so on. A good programmer will need to know all of it, of course. But if you throw a beginner headlong into C right off the bat, that person may well end up not a programmer at all.
 
Code:
sensor_data().each_with_index{ |value, index| puts "Measurement #{index + 1}: #{value}" }

The Ruby code is not only simpler and more compact, it's far easier to conceptualize. Why should a beginner need to know what a pointer is to complete a task like this, let alone understand how pointers and arrays are related in C? Making an absolute beginner learn stuff like this right off the bat is like teaching someone to fly on a 747.

I'm sorry, but I'd disagree. Telling somebody to learn Ruby instead of C is telling somebody to learn to fly a 747 instead of a Cessna. That code hides so much complexity that you're not able to conceptualize how to fly (understand your code), but rather tweak knobs on a fancy control panel (call opaque APIs and hope for the best).

From reading the Ruby code, I understand what that is supposed to do, but I don't have a clue what it actually does.

That's my main concern. There are two major goals that people desire:
1) engineering for production
2) rapid prototyping

They're both good things. But they're opposed to each other. One's goal is to make something that's architecturally sound and designed to be actually used by a massive audience. The other is to get something to the point of working to prove a concept.

Ruby, JS, Python, etc. are biased towards the rapid prototyping camp because it's so much easier to get something up and running. But it's so much harder to get something performant.

In C, you can see exactly what's going on because it's practically a human-readable assembler. It'll take you a lot longer to build something large in C, but dive right into your debugger and profiler and you'll know what's going on whenever you want.

When you're starting out, you need to know the basics.

As an aside:
How many people here know what Cocoa Bindings are? How many people here have used them? It's a perfect example of how something that sounds conceptually awesome and should make life easier for the programmer can turn into absolute hell once you need to debug it.
 
C or Java?

The answer depends on what you're trying to accomplish. Both can be used to create quality, large-scale systems.

If you're interested in Mac Programming, C would be better.

If you're interested in getting a programming job with a big company, Java would be better most likely.

But it really depends on your goal.
 
I'm sorry, but I'd disagree. Telling somebody to learn Ruby instead of C is telling somebody to learn to fly a 747 instead of a Cessna. That code hides so much complexity that you're not able to conceptualize how to fly (understand your code), but rather tweak knobs on a fancy control panel (call opaque APIs and hope for the best).

You're not hoping for the best. You're telling the computer what you want it to do, and it's doing it. These are the fundamentals of programming. With a higher-level language, you don't need to be as explicit about each step. Which is a Good Thing, because you're not trying to learn to think like a computer. You're trying to learn to think like a programmer first. A good programmer can easily learn to think like a computer. It's very hard to teach a computer to think like a programmer.

From reading the Ruby code, I understand what that is supposed to do, but I don't have a clue what it actually does.

That's nonsense. You know exactly what the code does from reading it (assuming you understand the syntax of the language, of course). You may not be aware exactly how the computer is carrying your instructions out, but so what? By that logic, you can't learn C until you've taken sufficient courses in physics, electronics, and microprocessor design to understand exactly what's happening inside the wiring.

That's my main concern. There are two major goals that people desire:
1) engineering for production
2) rapid prototyping

They're both good things. But they're opposed to each other. One's goal is to make something that's architecturally sound and designed to be actually used by a massive audience. The other is to get something to the point of working to prove a concept.

Ruby, JS, Python, etc. are biased towards the rapid prototyping camp because it's so much easier to get something up and running. But it's so much harder to get something performant.

This argument has been done to death, and it's obviously disproved by the large number of high-traffic websites out there running code based on Ruby, JavaScript, Python, PHP, and so on. How many sites out there are running on C?

Performance for the sake of performance is a pointless goal, given that processors are fast enough now that we can legitimately place more value on the programmer's time and effort than the computer's processing capability.

In C, you can see exactly what's going on because it's practically a human-readable assembler. It'll take you a lot longer to build something large in C, but dive right into your debugger and profiler and you'll know what's going on whenever you want.

If your code isn't working the way you intend it to, you've made a mistake. Find it and fix it. That's true in any language. In C, it's just a lot easier to make mistakes, because there are more mistakes to make. The only thing you're learning is what hoops you have to jump through. I guarantee you that if your code doesn't work as intended, it's not Ruby's fault. It's yours.

And performance is a non-issue for people learning to program. Most of the time, it's a non-issue period. In the event that it becomes crucial for some reason, well, that's the time to optimize your code, or drop down into a lower level programming language like C.

In fact, there's an argument to be made that you're better off learning to program on a language that offers less in terms of raw performance. Let's say you write some code that's horribly inefficient for some reason. If you write that code in C, given how fast processors are nowadays you may not even realize it. Write it in a high-level language like Ruby or Python, though, and it might start to drag. Guess what? That's a great opportunity to learn to optimize your code! The novice C programmer, however, is likely to continue blithely onward, remaining blissfully unaware that he's just picked up a horrible programming technique. Just because you can make a bubble sort go really really fast in C doesn't make it an efficient sorting algorithm.
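
For instance, here's a deliberately naive bubble sort in Ruby (a sketch, just to illustrate; any O(n^2) algorithm would make the same point):

Code:
def bubble_sort(arr)
  a = arr.dup
  loop do
    swapped = false
    (a.length - 1).times do |i|
      if a[i] > a[i + 1]
        a[i], a[i + 1] = a[i + 1], a[i]  # swap adjacent out-of-order elements
        swapped = true
      end
    end
    break unless swapped  # a full pass with no swaps means we're sorted
  end
  a
end

bubble_sort([5, 3, 1, 4, 2])  # => [1, 2, 3, 4, 5]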

When you're starting out, you need to know the basics.

You're confused about what constitutes "the basics". Flow control, variables, functions, logic—these are the basics. Debugging a buffer overrun and casting pointers is the advanced stuff. Would you forbid people from driving until they can repair an engine?
 
You're confused about what constitutes "the basics". Flow control, variables, functions, logic—these are the basics. Debugging a buffer overrun and casting pointers is the advanced stuff. Would you forbid people from driving until they can repair an engine?

I sort of agree - I'm OK with people learning something like Ruby or Python to get their feet wet with flow control, etc. But get them into a lower-level language class sooner rather than later. Interestingly, I've found over the years that folks I've worked with or hired who have a solid grounding in a lower-level language are well placed to pick up the Rubies and Pythons of the world, but less so the other way around. They can do it; it just seems more of a struggle. And the lessons learned in debugging something like C play well in troubleshooting really complex issues in the large systems they may one day help write.

Just my observations and 2 cents.
 
C++ is by far the most powerful language, with proven concepts decades old. Once a sufficient level has been reached, it is virtually IMPOSSIBLE to write bad C++ code. Now, people who can't understand things like the fencepost concept are just low-quality programmers and cannot be trusted to work on mission-critical software (this is software where the programmers are paid well and treated like rockstars). Hence you have many dumbed-down languages where developers work in cubes and are essentially commodity resources.

Java has its purposes, mainly because of its VM feature. Other than that, it is a horrid language with many, many annoying restrictions. Its non-deterministic running time makes it a no-go for anything mission-critical with a time component.
 
Anyone who can't write a for loop shouldn't be working as a programmer. There needs to be a minimum level of competence. Anyway, when that 128-byte cache-line Haswell comes out, SOMEONE is going to have to write the optimized code... very likely me.


 
Sure, but you had to think about it. And if you had accidentally typed i<=30 instead of i<30, you're now executing 31 times. On the other hand, a Ruby statement like "30.times" will execute, well, 30 times.

No I didn't, because I'm a good programmer. ;)

Look, I like Ruby as much as the next guy and use it quite frequently, but the point you're making here isn't really a good one.

Sure, if I typed i<=30 instead of i<30, the loop will run 31 times. In Ruby, if you type "31.times" instead of "30.times", the loop will run 31 times. Both are 1-character typos, and to me, both are as obvious as each other. Just because you're not fluent enough at C to be able to read and write it without thinking doesn't suddenly mean Ruby is better.
 
I'm going to restrict myself to which is the better learning language. Which language is better is like asking whether a hammer is better than a screwdriver. People who don't understand this are not to be trusted.

I would say that Java is easier to learn, mainly because it protects you from "strange" errors. That is, programming errors will give you more sensible error messages than in C. Errors in C can sometimes lead to weird behavior because you're operating fairly close to the OS, unlike in Java.

Some people seem to think that C will put hair on your chest. I happen to think one should go with the stuff that is easiest to learn first, and once you have a solid foundation, you can move on to other languages. When teaching programming, most universities start you out with Java, which seems to support that notion.

I think my progression was something like Pascal -> C -> Assembler -> Python -> Java -> Perl (Yikes!) -> Ruby -> Scala (plus messing around in a handful of other languages). That means I went from fairly high-level to very low-level and then back up. I feel that is an excellent way to learn.

BTW, some people recommend Python or Ruby as a first programming language. It depends on how you learn. I personally feel that these are difficult languages to learn (properly) because of their multiparadigm nature. You are not expected to understand the previous sentence :)

If you ask me what is my favorite language I would say Scala currently. Would I recommend it as a first programming language? No.
 
No I didn't, because I'm a good programmer. ;)

So, you've never made a fencepost error then? Not even once?

Sure, if I typed i<=30 instead of i<30, the loop will run 31 times. In Ruby, if you type "31.times" instead of "30.times", the loop will run 31 times. Both are 1-character typos, and to me, both are as obvious as each other. Just because you're not fluent enough at C to be able to read and write it without thinking doesn't suddenly mean Ruby is better.

Typing 31 instead of 30 would stick out like a sore thumb, and would be noticed immediately. Typing <= instead of < is something you might do without even realizing it, and the mistake might not be caught until it causes a problem, if even then. Or say you were iterating over a subsection of an array, from j to k. So you'd type "for(i=j; i<=k; i++)". But later, you say to yourself, "Aha, what if k exceeds the size of the array?" Well, the array size is set via #define. So just prior to your for loop, you insert "if(k>ARRAY_SIZE) k = ARRAY_SIZE;". Oops.

Think it never happens? Think again. The fact is that this sort of error is common enough that there's an entire Wikipedia article dedicated to it. It doesn't matter how fluent you are at C; if you think you're immune to making such errors, you're not a good programmer, just an arrogant one.
 
Think it never happens? Think again. The fact is that this sort of error is common enough that there's an entire Wikipedia article dedicated to it. It doesn't matter how fluent you are at C; if you think you're immune to making such errors, you're not a good programmer, just an arrogant one.

I'm sorry. I can't resist.

I agree that off-by-one errors happen even to experienced programmers, which is why the foreach construct exists in many languages. The basic "iterating through a list" for loop is, however, not that difficult once you've done it 1000 times.

I don't think one example, however elegant, is enough to judge a language as a good learning language. You need to take a more general look.

On that note: In Ruby you have lambda expressions in the form of blocks. You also have functions in the form of Procs, which are pretty much the same thing, although Procs are actually objects. Then you have methods, which are the OOP approach, but then you can have methods within methods which, while extremely useful, sort of breaks the analogy. So at least four interrelated concepts for "a chunk of code you can execute repeatedly". Also, you get closures, which is IMHO a difficult concept to grasp the first time around.
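
To illustrate the closures point, a made-up sketch:

Code:
def counter
  n = 0
  -> { n += 1 }  # the lambda closes over n, which outlives the method call
end

c = counter
c.call  # => 1
c.call  # => 2 -- the captured n persists between calls; explaining why is the hard part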

In Ruby you have different methods for the same thing, and for almost the same thing. What exactly is the difference between map and collect? There isn't one. However, what about the difference between each and map? There is a significant difference. Lots of sugar can also obscure what is going on, and when is sugar actually something with different behaviour that just looks like sugar? What about truly magic stuff like the no-such-method trick? If you look at other people's code (and you should) you're bound to run into it.
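
Concretely (a small illustrative snippet):

Code:
[1, 2, 3].map     { |x| x * 2 }  # => [2, 4, 6]  returns a new array of results
[1, 2, 3].collect { |x| x * 2 }  # => [2, 4, 6]  identical; collect is an alias of map
[1, 2, 3].each    { |x| x * 2 }  # => [1, 2, 3]  returns the receiver; the results are thrown away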

In Ruby you have the functional, side-effect-free programming concept mixed with OOP programming, which promotes the idea of mutable objects. By the way, you also have Perl-like implicit variables if that's your thing. A beginner would have no idea when to use one and when to use the other.

Powerful, fun, and sometimes elegant? Yes. My first language of choice? No. It depends on how you learn, but I think Ruby would have confused me a great deal.

Finally, a pet peeve of mine: In your example you use sensor_data()... implying that this is a function, but since no-args parentheses are optional in Ruby you could just as easily have written sensor_data... which, to the untrained eye, looks like a variable. This is confusing and can lead to weird errors.
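
To spell it out (sensor_data here is just a hypothetical stand-in):

Code:
def sensor_data
  [1.0, 2.0, 3.0]  # stand-in for the real collection code
end

x = sensor_data    # reads like a variable, but is actually a method call
y = sensor_data()  # the same call; the parentheses are the only visual clue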
 
I agree that off-by-one errors happen even to experienced programmers, which is why the foreach construct exists in many languages. The basic "iterating through a list" for loop is, however, not that difficult once you've done it 1000 times.

Of course it's not that difficult. It's a trivial example to demonstrate the point.

On that note: In Ruby you have lambda expressions in the form of blocks. You also have functions in the form of Procs, which are pretty much the same thing, although Procs are actually objects.

Blocks are objects. Specifically, they're Procs. A Proc can also be explicitly generated and passed around, but you can pass a block to a method and it'll show up as a Proc.
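
A quick sketch of what I mean (the method name is made up):

Code:
def demo(&blk)    # & captures the block passed at the call site as a Proc
  puts blk.class  # prints "Proc"
  blk.call(21)    # => 42
end

demo { |x| x * 2 }  # the block literal arrives inside demo as a Proc object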

Then you have methods, which are the OOP approach, but then you can have methods within methods which, while extremely useful, sort of breaks the analogy.

Why? Methods are objects just like everything else in Ruby. Ergo, they can contain other objects. It's actually quite elegant in its simplicity.

So at least four interrelated concepts for "a chunk of code you can execute repeatedly". Also, you get closures, which is IMHO a difficult concept to grasp the first time around.

I only count two: Methods and Procs. Methods are tied to an object, Procs are not.

In Ruby you have different methods for the same thing, and for almost the same thing. What exactly is the difference between map and collect? There isn't one.

Yes, it's called aliasing. It's very clear from the documentation that collect and map are synonyms. I'm not sure what the big deal is here.

However, what about the difference between each and map? There is a significant difference.

Okay, so? Again, yes, some methods have aliases, and some methods actually do different things. A quick glance at the documentation will clear this up if you're confused.

Lots of sugar can also obscure what is going on, and when is sugar actually something with different behaviour that just looks like sugar? What about truly magic stuff like the no-such-method trick? If you look at other people's code (and you should) you're bound to run into it.

There's nothing magic about method_missing. It works exactly like you'd expect in an object oriented language. Again, if you're confused about how a specific language function works, the documentation is all right there. I guarantee you I could explain how method_missing works to a new programmer more easily than I could explain pointers.
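
For example, a toy class (Greeter and say_hello are invented for illustration):

Code:
class Greeter
  def method_missing(name, *args)
    if name.to_s.start_with?("say_")
      puts name.to_s.sub("say_", "").capitalize  # say_hello => "Hello"
    else
      super  # everything else still raises NoMethodError as usual
    end
  end
end

Greeter.new.say_hello  # no such method is defined; method_missing picks it up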

In Ruby you have the functional, side-effect-free programming concept mixed with OOP programming, which promotes the idea of mutable objects. By the way, you also have Perl-like implicit variables if that's your thing. A beginner would have no idea when to use one and when to use the other.

Unless the beginner were to, say, read a tutorial maybe?

Finally, a pet peeve of mine: In your example you use sensor_data()... implying that this is a function, but since no-args parentheses are optional in Ruby you could just as easily have written sensor_data... which, to the untrained eye, looks like a variable. This is confusing and can lead to weird errors.

That's a feature, not a bug, and it's a very cool one once you understand it. Yes, in this case I added the parentheses to indicate it was a method for this specific example (I didn't want anyone thinking I was cheating and just accessing a variable rather than calling a function), but I'd normally leave them out. The reason for this is that it doesn't matter whether sensor_data is a variable or a method. This is quite powerful once you grok it.

Consider a Price object that can store and retrieve a value in dollars or euros. Rather than having a variable called "dollars" and methods called "dollars()", "euros()", "set_dollars(x)", and "set_euros(x)", you can just have a variable called "dollars" and methods called "euros" and "euros=(x)". You have the euros method convert the dollars value to euros and return it, and the euros= method convert the value to dollars and store it. Now, users of your class can treat dollars and euros as "virtual variables" that are simply different abstractions of the same thing. And what's more, if you later change your class to be euro-centric (or even pound-centric) instead of dollar-centric, you don't need to modify any of the code that refers to your class.
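
A sketch of that Price class (the exchange rate is made up):

Code:
class Price
  EUROS_PER_DOLLAR = 0.9  # made-up rate, purely for illustration

  attr_accessor :dollars

  def initialize(dollars)
    @dollars = dollars
  end

  def euros                # a method, but callers read it like a variable
    @dollars * EUROS_PER_DOLLAR
  end

  def euros=(amount)       # assignment syntax dispatches to this method
    @dollars = amount / EUROS_PER_DOLLAR
  end
end

price = Price.new(100)
price.euros       # => 90.0
price.euros = 45  # stores 50.0 dollars internally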

Don't get me wrong. Ruby can and will give you more than enough rope to hang yourself. That's a side effect of being powerful and useful. But even though it's a fairly complicated language under the surface, from the standpoint of a novice you can write very clear and effective procedural code, and understand what exactly it is you're doing from a strictly logical viewpoint. You can also learn the fundamentals of OOP in a language that's designed around the concepts. In my opinion, those are far more important things to learn than pointer arithmetic and making sure you null-terminate your strings. And a new programmer who is seeing results more quickly because he's not trying to figure out why his program is spitting out garbage for some inexplicable reason is more likely to stick with it.
 
First of all: Did you learn Ruby as your first programming language? Just wondering.

Of course it's not that difficult. It's a trivial example to demonstrate the point.

Except that a trivial example doesn't actually demonstrate the point. But I get it :)

Blocks are objects. Specifically, they're Procs. A Proc can also be explicitly generated and passed around, but you can pass a block to a method and it'll show up as a Proc.

Why? Methods are objects just like everything else in Ruby. Ergo, they can contain other objects. It's actually quite elegant in its simplicity.

If you don't know what an object is then that is difficult to grasp: "An object is a structure that has state and behavior. Except that in Ruby the behavior is actually objects. Which is also the case with the state. So what exactly is the difference between state and behavior? Well," ... etc.

Compare that with Java: "An object is a structure that has state and behavior". Done. No fancy stuff. My experience is that beginners have difficulties distinguishing between classes and objects. That should be the focus in the beginning.

I only count two: Methods and Procs. Methods are tied to an object, Procs are not.

Conceptually there are more than two, at least in the eyes of the beginner.

Yes, it's called aliasing. It's very clear from the documentation that collect and map are synonyms. I'm not sure what the big deal is here.

The problem is bloat. There is simply no good reason for having more than one method that does the same thing as another method. In my opinion there is no good reason for each or each_with_index to exist. What if you really needed a map_with_index? Well, it's not there. Actually, what you probably really needed was a zip_with_index, since that would be usable for both each and map. It's not there.

Okay, so? Again, yes, some methods have aliases, and some methods actually do different things. A quick glance at the documentation will clear this up if you're confused.

But will the beginner realize why the difference is significant? In fact, a beginner who says "I'm confused. Why is that method ever necessary? Why not just use map?" would be especially promising.

There's nothing magic about method_missing. It works exactly like you'd expect in an object oriented language. Again, if you're confused about how a specific language function works, the documentation is all right there. I guarantee you I could explain how method_missing works to a new programmer more easily than I could explain pointers.

I'm not confused. I'm saying that I believe a beginner would be confused and the documentation, although very good in Ruby, is not necessarily the best teaching tool.

Unless the beginner were to, say, read a tutorial maybe?

No. Are you going to explain to a beginner the difference between declarative and imperative style, what programming paradigm promotes what style, and when you should use one thing over the other? Having two ways of doing the same thing leads to confusion. Note that I'm not arguing against having two ways of doing the same thing. I'm arguing against having it in a teaching language. What about implicit variables? I would argue that they are mostly useful in the case of regular expressions but should generally be avoided in most other situations.

That's a feature, not a bug, and it's a very cool one once you understand it. Yes, in this case I added the parentheses to indicate it was a method for this specific example (I didn't want anyone thinking I was cheating and just accessing a variable rather than calling a function), but I'd normally leave them out. The reason for this is that it doesn't matter whether sensor_data is a variable or a method. This is quite powerful once you grok it.

Consider a Price object that can store and retrieve a value in dollars or euros. Rather than having a variable called "dollars" and methods called "dollars()", "euros()", "set_dollars(x)", and "set_euros(x)", you can just have a variable called "dollars" and methods called "euros" and "euros=(x)". You have the euros method convert the dollars value to euros and return it, and the euros= method convert the value to dollars and store it. Now, users of your class can treat dollars and euros as "virtual variables" that are simply different abstractions of the same thing. And what's more, if you later change your class to be euro-centric (or even pound-centric) instead of dollar-centric, you don't need to modify any of the code that refers to your class.

So what you're saying is that in the case of properties, it is a feature, but I would argue that it's confusing in other respects.

Don't get me wrong. Ruby can and will give you more than enough rope to hang yourself. That's a side effect of being powerful and useful. But even though it's a fairly complicated language under the surface, from the standpoint of a novice you can write very clear and effective procedural code, and understand what exactly it is you're doing from a strictly logical viewpoint. You can also learn the fundamentals of OOP in a language that's designed around the concepts. In my opinion, those are far more important things to learn than pointer arithmetic and making sure you null-terminate your strings. And a new programmer who is seeing results more quickly because he's not trying to figure out why his program is spitting out garbage for some inexplicable reason is more likely to stick with it.

I think a simple learning language is better than a complicated learning language that, as you say, gives you more than enough rope to hang yourself with. I don't consider C the best alternative. Java is a rather verbose OO language that forces you to realize what is actually going on at the language level. Not a lot of features. That's unfortunate for the rest of us, and I would personally choose Ruby over Java any day, all else being equal, but I'm not a beginner.
 
If you ask me what is my favorite language I would say Scala currently. Would I recommend it as a first programming language? No.

Well, my favorite language is AL, but for most applications it is highly impractical these days, and not really even faster than what you get with a good compiler. Perhaps a good place to start might be classic MS BASIC, complete with numbered lines and all. Teach about a month of that, then move on to a more modern language, at which point the student will find the change in structure and syntax tremendously empowering. Kind of like holding back on the reins at first, then letting the horse run.
 
It depends largely on what type of applications you want to write. Server-side business applications are more often than not written in Java (and Java EE).

However, for you to better understand the basics of what is going on in your computer when your code runs, I would say C (or maybe even assembler ;-)). You will learn what a piece of text actually looks like in memory, how you allocate and dispose of memory, et cetera.

Also, if you get to work with Objective-C (IMHO a horrible language, but I guess if you like a mix of Smalltalk and C you may have a different opinion) you will eventually have to learn some concepts of C anyway.
 
If you are a beginner, then I would suggest Java. It is usually the first programming language taught by universities. I think that's because it's conceptually easier to learn. Once you've decided that you like programming, then you should move on to C. I think C has more challenging concepts, but the syntax is fairly easy to understand, especially if you already know Java. Plus, Java is just more gratifying, because you will be able to do more sooner with Java than with C, thanks to the Java libraries. You can work with GIFs, GUIs, etc. far sooner with Java than with C.
 
Easier coding == less power
difficult coding == more power

This is why VB is easy to code but lacks power and why C++ is difficult to code but has more power.

Easier coding languages might be OK for learning, but they teach you how to be a lazy programmer instead of taking the time to optimize your code. You also have less power.

Honestly, the best language to learn is Java. Once you learn Java, C++ is easy. Master these two languages and you can code in any language.

 
Seriously, you'll never want to write
Code:
for(int i=0; i<30; i++) {
...
}
/* Does this execute 29, 30, or 31 times? */

Frankly, if you know how the for construct works and you can't work out what that loop means, then you're probably not a very good programmer. It runs 30 times, zero to 29.

I hate that kind of quasi-English code that Ruby seems to be using; you might as well be writing COBOL.

----------

Sure, but you had to think about it.

No. It's zero to 29, 30 times.

And if you had accidentally typed i<=30 instead of i<30, you're now executing 31 times. On the other hand, a Ruby statement like "30.times" will execute, well, 30 times.

This is a trivial example, though. Let's say you have a function that collects sensor data and stuffs it into an array of indeterminate length. After collecting the data, you then want to print out the results, one per line.

In C, your code would look something like this:

Code:
	float data_array[ARRAY_SIZE];  /* The array that will hold our sensor data */
	int sensor_measurements;       /* Holds the return value of collect_sensor_data(), which indicates how many measurements were collected */
	int i;
	sensor_measurements = collect_sensor_data(data_array, ARRAY_SIZE);  /* collect_sensor_data() takes the maximum number of measurements so we don't overrun our buffer */
	for(i = 0; i < sensor_measurements; i++) {
		printf("Measurement %i: %f\n", i + 1, data_array[i]);
	}

The equivalent code in Ruby would be:

Code:
sensor_data().each_with_index{ |value, index| puts "Measurement #{index + 1}: #{value}" }

The Ruby code is not only simpler and more compact...

That Ruby code is completely unreadable....
 