Apr 12, 2011, 11:24 AM
I am looking for the best, lowest-level description of why Verizon's CDMA can't do voice and data simultaneously. I looked through Google and Wikipedia but couldn't find a good description of it. I have a decent understanding of communications, so get as deep as you want. I just want to understand it.
Apr 12, 2011, 11:28 AM
It's a limitation of the software and hardware.
Apr 12, 2011, 11:35 AM
Well here's something that I posted hidden in another thread, with a few edits. I think it deserves more attention.
Just because something is more widely adopted doesn't mean it's better. Asian countries use CDMA, and they are the best of the best when it comes to all things cellular/mobile. Also, want to know the real reason GSM became the world standard? In Europe, various governments decided that they (the Europeans) had designed the ultimate digital cellular system, and they passed laws making it illegal to deploy anything except GSM, whose primary supporters/suppliers were Nokia, Ericsson, Siemens and Alcatel.
Let's expand upon this.
The original cell phones were analog, using fairly straightforward FM for voice communication. When your phone was in a call, it was granted a frequency by the cell and used it exclusively for the entire duration of the call. FM encoding is extremely inefficient in its use of bandwidth, spectrum was scarce and expensive, and it rapidly became clear that FM couldn't handle the traffic which was expected, and which was really needed to make cellular telephony a profitable business. One obvious approach was to go digital, taking advantage of advances in microprocessor and digital IC technology to compress the voice traffic in both directions, and thus you saw deployment of the first Time Division Multiple Access (TDMA) digital systems. These take a single channel and timeshare it among several phones, which digitize and compress their voice traffic and transceive it during their timeslice. With IS-136, a 30 KHz channel which had carried only one voice call with AMPS could now carry three digitized calls.
GSM went further than that, and abandoned the old channel size entirely. It allocated 200 KHz channels and divided them into 8 slices, giving each phone somewhat less than 25 KHz effective bandwidth. (There are some losses due to time guardbands and protocol overhead.)
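The per-call spectrum arithmetic implied by those figures can be sketched out (guardbands and protocol overhead ignored, one direction only, so these are the best-case numbers):

```python
# Per-call spectrum use implied by the figures above, ignoring the time
# guardbands and protocol overhead mentioned in the post (one direction).
AMPS_CHANNEL_KHZ = 30.0
amps_per_call = AMPS_CHANNEL_KHZ / 1   # one analog FM call per 30 kHz channel
is136_per_call = AMPS_CHANNEL_KHZ / 3  # same 30 kHz channel, three timeslots
gsm_per_call = 200.0 / 8               # 200 kHz GSM carrier, eight timeslots

print(amps_per_call, is136_per_call, gsm_per_call)
```

So IS-136 squeezed three calls into the old channel (about 10 kHz each), while GSM's wider carrier works out to a bit under 25 kHz per call once overhead is subtracted.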
GSM also included a very powerful set of features above that, and included some interesting features not directly associated with the RF link, such as a personality module which contained a customer's phone number and billing information that could be moved to another phone any time the customer wished. (That particular feature turned out to be a decidedly mixed blessing. While that ability was very convenient for legitimate customers, it was also a magnet for thieves and frauds.)
GSM was clearly superior to IS-136 or such abortions as iDEN (a Motorola design which never became an industry standard because Moto was never willing to license it, meaning systems which adopted it could only get infrastructure and handsets from Motorola).
In the computer industry we talk about the "OSI seven-layer model", where the process of communication is modularized and each layer uses the one below it without worrying how the lower layer actually works. TCP works whether the physical layer is 802.11b or ethernet or something else entirely, and TCP itself doesn't change based on that. TCP uses IP, and IP uses the datalink layer, and the problems of the physical layer are dealt with by the datalink layer. But if the physical layer is a 56 kilobit modem, then there are things which won't be possible, which might be possible with 100 megabit ethernet. No amount of work at higher levels can compensate for the fundamental superiority of ethernet over a telephone modem.
Cell phone protocols do the same kind of thing. There's an RF layer and protocols above that, some of which can be very high level and quite abstract, such as the one which controls sending of text messages. However, the change from analog to TDMA was a change at the RF layer. CDMA was yet another approach to the RF layer, which was radically different again. (IS-95 is a specification for a complete protocol stack which includes CDMA as its RF layer.)
In fact, CDMA was so revolutionary that when it was first discussed, many thought it couldn't be made to work. Indeed, at least one European company deeply involved with GSM, Ericsson, went through the three classic stages of Not Invented Here syndrome:
1. It's impossible.
2. It's infeasible.
3. Actually, we thought of it first.
In IS-95 CDMA, a single carrier frequency has a bandwidth of 1.2288 MHz, and up to 40 cell phones in a given sector can all be transmitting chips at that rate on the same carrier frequency. On first examination that seemed to imply sending roughly fifty million bits per second through a one-and-a-quarter MHz band, which would indeed violate the Shannon limit. The mistake was treating chips as "information" in Shannon's sense: though those phones were sending chips that fast, each was actually sending bits (real data) at no more than 14,400 bits per second. (I'll try not to get too bogged down in technical details here, but to some extent it's unavoidable.)
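The chips-vs-bits distinction is easiest to see in a toy direct-sequence example. This is a minimal sketch, not IS-95 itself: it uses 8-chip Walsh codes instead of IS-95's 64-chip codes, three users instead of forty, and no noise or power control. Every user transmits on the same "frequency" at the full chip rate, yet each one's actual bit rate is 1/8th of that, and a receiver correlating with one user's code pulls that user's bits back out of the sum.

```python
import numpy as np

# Rows of a Hadamard matrix are mutually orthogonal Walsh codes.
def hadamard(n):
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

codes = hadamard(8)             # 8 orthogonal spreading codes, 8 chips each
users_bits = {0: [1, -1, 1],    # three users' data bits (BPSK, +1/-1)
              3: [-1, -1, 1],
              5: [1, 1, -1]}

# Spread: each bit becomes 8 chips (bit * code); everyone sums on the air.
airwaves = np.zeros(3 * 8)
for u, bits in users_bits.items():
    airwaves += np.repeat(bits, 8) * np.tile(codes[u], 3)

# Despread: correlate the composite signal against one user's code.
# Cross-correlation with the other codes is zero, so only that user's
# bits survive, at 1/8th of the chip rate.
def despread(signal, code):
    chunks = signal.reshape(-1, len(code))
    return np.sign(chunks @ code)

for u, bits in users_bits.items():
    print(u, despread(airwaves, codes[u]))
```

Scale the same idea up to 64 chips per bit and dozens of users and you get the IS-95 picture: tens of megachips per second on one carrier, but each phone's information rate is still only thousands of bits per second, so Shannon is untroubled.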
Unfortunately for the skeptics, Qualcomm did a field test in New York City where several prototype phones mounted in vans operated at once on the same frequency, talking to multiple cells which also all operated on that same frequency.
The next argument was that though it seemed technically possible, it would be too expensive. Everyone knew that the electronics required to make CDMA work was a lot more complicated than what TDMA used, and Ericsson's loud voices claimed that it could never be reduced in price enough to make it competitive. And shortly thereafter Qualcomm proved that wrong, too, by beginning to produce both infrastructure and phones at very competitive prices. (Qualcomm did this to bootstrap the industry. It's no longer in either business.)
After which Ericsson suddenly decided that it had applicable patents and took Qualcomm to court. Over the long drawn out process of litigation, every single preliminary court judgment went in favor of Qualcomm, and it became obvious that Ericsson didn't have a case and that Qualcomm wasn't going to be intimidated. Ultimately, the entire case was settled in a massive omnibus agreement where Ericsson became the last of the large companies in the industry to license Qualcomm's patents (on the same royalty terms as everyone else) while taking a large money-losing division off Qualcomm's hands and assuming all the liabilities associated with it, and granting Qualcomm a full license for GSM technology. The industry consensus was that this represented a full scale surrender by Ericsson.
Nokia wasn't anything like as foolish and had licensed several years before. (Just in passing, the fools at Ericsson are in the front office. Their engineers are as good as anyone else's.)
Still, in the years of apparent chaos in the US, when loud voices in Europe proclaimed the clear advantage of a single continental standard, order began to appear out of the chaos here. Small companies using the same standards set up roaming agreements, and then started merging into larger companies, which merged into yet larger ones. One company (Sprint) started from scratch to build nationwide coverage. Bell Atlantic Mobile acquired GTE Mobile (who had been a joint partner in PrimeCo), and eventually merged with Airtouch to form Verizon, all of which was based on IS-95 CDMA, mostly on 800 MHz. Sprint eventually implemented a reasonable nationwide system also based on CDMA. The last major nationwide system to form was Cingular, after the various GSM carriers in the US realized they were in big trouble competing against Verizon and Sprint and AT&T (which uses IS-136).
Once the existence and commercial feasibility of CDMA were established beyond doubt, other aspects of it began to become clear. At the RF layer, CDMA was obviously drastically superior to any kind of TDMA. For one thing, in any cellular system which had three or more cells, CDMA could carry far more traffic within a given allocation of spectrum than any form of TDMA. (Depending on the physical circumstances, it's usually three times as much but it can be as much as five times.) For another, CDMA was designed from the very beginning to dynamically allocate spectrum.
In TDMA, a given phone in a given voice call is allocated a certain fixed amount of bandwidth whether it needs it or not. In IS-136 that's a bit less than 10 KHz, in GSM it's somewhat less than 25 KHz. (That's in each direction; the total is twice that.) But humans don't use bandwidth that way; when you're talking, I'm mostly listening. So your 25 KHz channel to me is carrying your voice, and my 25 KHz channel to you is carrying the sound of me listening to you silently.
In CDMA, the amount of bandwidth that a given phone uses changes 50 times per second, and can vary over a scale of 8:1. When I'm silent, I only use 1/8th of the peak bandwidth I use when I'm talking. (But I don't actually send full rate most of the time even when I'm speaking.) That's very useful for voice but it's essential for data, which tends to be extremely bursty, and CDMA was born able to do this. It's always had that capability. It's also always had the ability for different phones to be given different overall allocations of bandwidth, because the initial standard included both 8K and 13K codecs (which run at 9600 and 14,400 bits per second respectively). So when higher data rates were desired, it was possible to augment the cell and create new cell phones which could transmit 56 kilobits per second using the same frequency as existing handsets.
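That rate behavior can be sketched in a few lines. The decision rule below is invented for illustration (a real vocoder classifies the audio itself), but the frame timing (one decision per 20 ms frame, so 50 per second) and the 8:1 full-to-eighth rate span match the description above:

```python
import random

# Sketch of IS-95's variable-rate voice coding: every 20 ms frame
# (50 frames per second) the vocoder picks full, half, quarter, or
# eighth rate, an 8:1 span.
RATE_FRACTION = {"full": 1.0, "half": 0.5, "quarter": 0.25, "eighth": 0.125}
FULL_RATE_BPS = 9600  # the 8K codec's channel rate

def pick_rate(speaking, rng):
    if speaking:  # talkers send a mix of full/half/quarter rate frames
        return rng.choice(["full", "full", "half", "quarter"])
    return "eighth"  # listeners drop to eighth-rate background frames

rng = random.Random(1)
# One second of a call: I talk for the first half, then listen.
frames = [pick_rate(speaking=(t < 25), rng=rng) for t in range(50)]
bits_sent = sum(RATE_FRACTION[r] * FULL_RATE_BPS / 50 for r in frames)
print(f"{bits_sent:.0f} bits in 1 s, vs {FULL_RATE_BPS} at constant full rate")
```

A TDMA channel carries the full-rate equivalent the whole time regardless; here the unused fraction is capacity the cell can hand to someone else, frame by frame.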
When GSM wanted to do that (send data at a rate faster than the existing voice channel supported), they ended up having to allocate an entirely new carrier just for that job, which handled nothing except data, and to deploy entirely new infrastructure for it. The resulting system is called GPRS, and in many ways it turned out to be very unsatisfactory for the operating companies because it's really expensive to deploy and because it cuts down on the bandwidth they have available for voice. A given chunk of spectrum must be permanently assigned to one or the other; it can't be reallocated dynamically. Data and voice in CDMA, on the other hand, both use the same carrier and bandwidth is reallocated between the two 50 times per second automatically, and you can implement high speed data without having to install new transmitters in all the cells.
With the push to greater and greater data rates, everyone recognized that a new generation of cellular equipment would be needed, the legendary 3G.
And for the reasons given above, and several others, it was equally clear that it had to use a CDMA air interface. GSM was the very best propeller-driven fighter money could buy, but CDMA was a jet engine, and ultimately TDMA could not compete. The fundamental weakness of TDMA at the RF layer could not be compensated for at any layer higher than that, no matter how well designed it was. GSM/TDMA was a dead end, and to create 3G, Europe's electronics companies were going to have to swallow their pride and admit that Qualcomm had been right all along.
This article in the Economist (old, but relevant) says that it's not going well. When Qualcomm and its partners designed a new 3G system with new capabilities, they were able to make it backward compatible with IS-95. The new standard is called CDMA 2000, and a CDMA2K handset can work with IS-95 infrastructure, and an IS-95 handset can work with CDMA2K infrastructure, and CDMA2K cells can sit next to IS-95 cells and use the same frequencies. Thus existing operating companies using IS-95 can upgrade incrementally replacing individual cells as budget allows and selling new handsets without having to wholesale replace all existing ones at once. Most important of all, it means that you can take an existing system using an existing spectrum license, and phase it over without acquiring any new spectrum.
None of that is true for GSM. CDMA and TDMA are fundamentally incompatible and there's no way to create a new system (which they're calling WCDMA) which can support existing TDMA handsets. It's technically impossible for the new standard to be backward compatible. Worse is that there's no easy way to phase existing spectrum over. In practice, when WCDMA appears, existing GSM systems would have to install it everywhere, issue new handsets to all customers, and then one day throw a switch. Or, as they actually did, license new spectrum for WCDMA while continuing to run GSM on the existing spectrum for legacy customers.
Couple of other key notes? Dropped calls on AT&T? Ring a bell? Wouldn't have happened on Sprint/VZW. Want to know why? CDMA towers have about 3x the call capacity per tower vs GSM towers. That's a lot. Also, with CDMA, your call is routed to two or three towers that are in range; if one tower drops off (i.e. driving or on a train), the other towers seamlessly pick it up and you're good to go. With GSM, your call only gets routed to one tower at a time, thus making tower handoff much more difficult to say the least, especially when you're on the fringe of two or more towers with equal or fluctuating signal. Dropped call city!
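That "routed to two or three towers" behavior is called soft handoff, and it only works because every CDMA cell uses the same frequency. A toy sketch of the selection step (the frame format and quality metric here are invented for illustration):

```python
# Soft-handoff sketch: the phone keeps radio legs to several cells at
# once ("make before break"), and for each voice frame the network keeps
# whichever copy arrived in the best shape.

def select_frame(copies):
    """Given the same frame heard via several cells, keep the best copy."""
    return max(copies, key=lambda c: c["quality"])

# Driving away from cell A toward cell B: A's copy degrades while B's
# improves, but the call never hinges on any single tower.
copies = [
    {"cell": "A", "quality": 0.2, "data": "frame-41"},
    {"cell": "B", "quality": 0.8, "data": "frame-41"},
    {"cell": "C", "quality": 0.5, "data": "frame-41"},
]
best = select_frame(copies)
print(best["cell"])  # the strongest leg wins
```

A GSM phone, by contrast, has exactly one leg at a time, so a handoff is a hard break from one tower followed by a reconnect to the next, which is exactly where fringe-area calls get dropped.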
Also, the better data management with CDMA allows for clearer calls, more reliability, and, as mentioned, more capacity.
Those things add up to a much better option than GSM. GSM does have a few pros, but they are far outweighed by the cons versus its CDMA counterpart. And what most people don't know is that GSM carriers now use WCDMA (a CDMA variant) for their 3G networks. That is why GSM can do voice + data: the handset is receiving two signals instead of one.