SATA3 is 6 Gb/s, which gets you well above 300 MB/s (750 MB/s raw, ~600 MB/s after encoding overhead)... although platter-based hard drives are much slower. You can only read data from the platters at around 60 to 80 MB/s or so.

While I mostly agree with what you've said, I'm not sure what part of my post you're responding to?

Also, plenty of HDDs get well over 100 MB/s; 150 MB/s sequential R/W isn't unheard of these days for desktop drives.

Lol, oh, I just figured out what you were talking about. Yes, SATA is an interface capable of a theoretical 6 Gb/s, but an actual 4.8 Gb/s (600 MB/s) due to encoding overhead. That said, it's just a medium/interface, the same way that we had Ultra-320 SCSI as far back as the early 2000s but no single drive (or really even any RAID config) that could come close to saturating it.
 
The 9x35 series of basebands from Qualcomm was designed for production on TSMC's 20nm fabs. The same fabs that pretty much everybody except for Intel is trying to get capacity on now. The rumour is yields aren't the greatest yet, which is why we are not seeing a wave of 20nm chips from all the fabless folks like Nvidia, AMD, and Qualcomm. There are only a few 20nm designs floating around in the wild that I know of at this point and none of them move iPhone numbers. That wave will likely come early next year when yields pick up and product pipelines can align.

Apple would have locked down the iPhone 6's hardware design months ago. There is no way they would be insane enough to make a bet on the availability of a non-critical part, especially with their history of always going modest on the radios. Double especially considering most networks have other bottlenecks before you even get to the limited CA and the higher 20 MHz speeds supported by Cat 4 LTE.

I could see Apple betting big on the SoC at 20nm, but not the radio. If the iPhone 6 has MDM9635 I will eat my hat. I will enjoy eating my hat too, because a better radio makes me happy. However, my money remains on MDM9625.
Cool (well, actually sad, but in regards to being informed... cool). Thanks for the info... and as a better radio would make me a happy camper too, don't worry about getting sunburn after eating your hat... in the less likely event that the iP6 really does show up with the 9635, I will gladly provide you with a free replacement hat. ;)
 
It does not work that way. Whether you download a 10 GB file in five seconds or in ten, it will eat the same amount of data.

My whole point is that once you go out and actually start using it, your data disappears almost instantly.

Well, if your plan does not meet your requirements, it's time to consider a change. I also have LTE and can pull down ~40 Mbps. Do I use bandwidth-intensive services all the time? No, but the speed is nice to have the few times I need it. Remember, LTE is no replacement for fixed broadband (cable, DSL, fiber). My USD 0.02.

Maybe Apple should fix iOS then? Sometimes it will just randomly drop WiFi, and toggling the WiFi switch in Control Center will reconnect. But when it drops during a video (in full screen you don't realize it), suddenly it downloads the rest of the video on LTE, and it does it so fast that your plan is just gone. I watch a lot of video on my iPad, but only on WiFi... until that happens. It even happens with my iPhone. I also have an AirPort Extreme base station, so somewhere along the line Apple is screwing something up. My house isn't very big and is wooden, and it does it on university networks as well as at my parents' house and other places.

Sorry, I can't afford a $200/mo data plan just to get a tiny fraction of what I get from my 100 Mbps, 1 TB cap cable line for $54.95/mo. My whole point here is that these cellular companies need to lower data plan prices or give us more data. Plans should start at 10 GB on LTE and go up from there in 10 GB chunks at the very least. When you can accidentally blow through your whole plan in 15 minutes of video stream buffering on your $150/mo two-line plan, there is something wrong.
 
That's some interesting info. But I am still wondering: at any given location, can the tower transmit to only one device at a time (however small that duration is)?

I gather that cell phone service is like insurance. If everyone claims their benefits at once, the service goes kaput.

It can definitely transmit to more than one person at a time, and does. Data does chew up more capacity than voice, and there certainly is a limit on GSM (2G) voice, but a single sector handles around 100 voice calls at a time. With 2 frequency carriers and 6 sectors on a big basestation, you're looking at 1,200 calls. Data is harder on capacity. For HSPA, one user can eat up an entire sector if they're maxing out the download, but the basestation should be rationing out that capacity. LTE can easily serve 100 users per sector, assuming 10 MHz of bandwidth and 2x2 MIMO -- after that, more can be supported by rationing out total capacity. This rationing is called overbooking. Increasing the MIMO order also increases the number of users served. T-Mobile deployed 4x4-upgradeable (4x2 to start) MIMO equipment from the get-go.
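A quick back-of-the-envelope sketch of that voice-capacity arithmetic (the figures are the rough numbers above, not spec values):

```python
# Back-of-the-envelope voice capacity for a large GSM basestation.
# All figures are the rough numbers from this post, not spec values.
calls_per_sector = 100   # approximate GSM voice calls one sector carries
carriers = 2             # frequency carriers per sector
sectors = 6              # sectors on a big basestation
print(calls_per_sector * carriers * sectors)  # -> 1200 simultaneous calls
```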
 
That's about right.

Yes, SATA is an interface capable of a theoretical 6 Gb/s, but an actual 4.8 Gb/s (600 MB/s) due to encoding overhead. That said, it's just a medium/interface...

So 300 MB/s is easily accomplished then. 8 bits per byte, but the line encoding adds overhead bits (much like the start and stop bits in other serial links), so it's 10 bits per byte on the wire, and at 6 Gbps that's 600 MB/s. Blows those ordinary HDDs away. lol... I misread your original post too... I thought you said an SSD can't do 300 MB/s... lol
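For anyone who wants it spelled out, a minimal sketch of that line-rate arithmetic:

```python
# SATA line-rate arithmetic: 8b/10b encoding puts 10 bits on the wire
# for every 8-bit data byte, so a 6 Gb/s link tops out at 600 MB/s of
# payload (before any protocol overhead).
line_rate_bps = 6e9
wire_bits_per_byte = 10           # 8 data bits encoded as 10 line bits
max_payload_mb_per_s = line_rate_bps / wire_bits_per_byte / 1e6
print(max_payload_mb_per_s)       # -> 600.0
```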
 
You assume wrong. RAM is faster than NAND flash storage.

NAND flash the size of the RAM would allow it to store off the state of RAM when you put the machine to sleep, and to boot much faster. It could also use this NAND during runtime to page out lesser-used RAM, using compression similar to how Mavericks operates. I am guessing some combination of these is what they will use it for.
 
One critical aspect of any wireless communication people seem to continuously fail to grasp is that only one device can send or receive at a time.

That means that the faster your connection, the less time it will spend blocking other devices; alternatively, the more data you can get out of your assigned timeslot. Your device can only send and receive during timeslot A-B, so it's critical to get as much out of that timeslot as you can, because during C-Z other devices will be using the channel and your device will be required to shut up. Obviously, if your device can receive 200 KB in its timeslot, then doubling the throughput of the communication between you and the transmitter means you'll get 400 KB in that timeslot.

Wireless speeds are always advertised assuming you have 100% of the timeslots; for obvious reasons, though, you'll never have 100%. You'll generally have a fraction of that. So if you have a 150 Mbps link and get 10% of the timeslots, then you're limited to receiving 15 Mbps. If you have a 300 Mbps link with the same 10% of the timeslots, you'll receive 30 Mbps.

However, the faster devices get, the more likely they are to be able to reduce the number of timeslots they reserve, meaning those timeslots are opened up to other devices which do require them. So instead of maybe having only 10% of the time, you might be able to get 15%, because other devices no longer reserved it. That means we now push our throughput from 30 Mbps to 45 Mbps.

That's 3x faster than what we started with, which we got from a 2x link increase.
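A minimal sketch of that arithmetic, using the same illustrative numbers:

```python
# Effective throughput = link rate x fraction of timeslots you are granted.
# The link rates and shares below are the illustrative figures from the post.
def effective_mbps(link_mbps: float, timeslot_share: float) -> float:
    return link_mbps * timeslot_share

print(effective_mbps(150, 0.10))  # -> 15.0 Mbps
print(effective_mbps(300, 0.10))  # -> 30.0 Mbps
# A faster radio frees timeslots for others, so your share may also grow:
print(effective_mbps(300, 0.15))  # -> 45.0 Mbps, 3x the original 15 Mbps
```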
This is not true. It seems to confuse several different radio concepts. Mostly it looks like old-style GSM TDMA with hard timeslots and one channel per direction. This hasn't been the way radio has worked for a good while now.


The interaction between UE and BTS is incredibly complex. So is how radio networks distribute loads. There are quite a few mechanisms that are used to serve many handsets at once. One of the primary examples is coding (CDMA). This is where many different handsets are transmitting on the same frequency at once, each with its own code symbol assigned by the BTS. The BTS receives the incoming noise and mathematically applies all of the symbols it is aware of to that random stream. This causes the signal to "rise" out of the noise.

LTE does not really use symbols in this way. It uses OFDM to slice carriers up into many sub-carriers. This allows many pieces of UE to share the same channel. Within the slices, the channel quality index (or similar) determines the available modulation (density of the constellation); you can do up to 256QAM these days, I believe. The more handsets attached to a tower, the lower the data rate for all of them simultaneously. However, not equally so: handsets with better quality indexes will probably receive higher throughputs, both because they will be running denser constellations and because they will receive more slices (priority coding). This is for downlink only (tower to handset). Handset to tower uses a different modulation scheme that is more power efficient.
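If it helps, here's a hedged sketch of how those pieces combine into a per-handset downlink estimate. The 15 kHz subcarrier spacing and 14 symbols per 1 ms subframe are LTE physical-layer figures; the scheduling shares and coding rate are made-up illustrations:

```python
# Rough per-handset LTE downlink estimate: share of subcarriers x symbol
# rate x bits per symbol x coding rate. Shares are illustrative only.
BITS_PER_SYMBOL = {"QPSK": 2, "16QAM": 4, "64QAM": 6, "256QAM": 8}

def downlink_estimate_mbps(bandwidth_mhz, modulation, share, coding_rate=0.75):
    subcarriers = int(bandwidth_mhz * 1e6 * 0.9 / 15e3)  # ~90% usable spectrum
    symbols_per_sec = 14 * 1000  # 14 OFDM symbols per 1 ms subframe
    raw_bps = subcarriers * symbols_per_sec * BITS_PER_SYMBOL[modulation]
    return raw_bps * coding_rate * share / 1e6

# Good channel quality index: dense constellation, generous slice share.
print(downlink_estimate_mbps(10, "64QAM", share=0.25))  # -> ~9.4 Mbps
# Cell-edge handset: sparse constellation, small share.
print(downlink_estimate_mbps(10, "QPSK", share=0.05))   # -> ~0.6 Mbps
```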


If you want to know more, I advise reading 3GPP releases 8, 10, and 11 at http://www.3gpp.org.
 

Very good info! Thanks for posting.
 
That's my biggest question too. Have we heard any new info on the radio chips and whether or not the Verizon users will (finally) be able to talk and look up important info at the same time?

It's not about the chip as much as the number of antennas. Even current chips allow two radios in use at one time.

Therefore many Verizon Android phones have three antennas:

  • LTE transmit
  • CDMA 1x transmit
  • Combined receive

This allows the Android phones to use voice + LTE at the same time.

Apple, on the other hand, only includes two antennas, which can swap the following roles:

  • GSM/UMTS/LTE transmit
  • Receive

This means that Verizon iPhones CANNOT use voice + LTE at the same time.

In fact, no current iPhone can, no matter who the carrier is.

For example, on a GSM carrier like AT&T, if you try to use data and voice at the same time, the LTE connection is dropped and you fall back to using UMTS-3G, which allows simultaneous voice + 3G.

--

Hopefully all carriers will implement voice over LTE soon, and it won't matter.
 
I think technically the iPhone has four antennas and three full cellular chains.

- Rx upper and lower (fully separate transceiver to antenna, 2x simultaneous diversity on the receive).
- Tx upper and lower (switch probably after the filter ICs controls which antenna is used. 2x exclusive diversity on the transmit. The switch prefers the one with the better quality decided somehow (SNR?, Quality Index?)).

In order to support CDMA2000 voice and LTE simultaneously, it would need yet another antenna, transceiver port, filter ICs, and power amp: basically a whole additional Tx chain. So you'd have to pack four entire cellular chains and five antennas into the phone to meet the minimum LTE + voice requirements, all in addition to the 802.11 and BT antennas.

As you said above, because voice is still circuit switched and not packet switched in order to place a voice call you have to hand back to 3G. VoLTE will be a benefit to all technologies not only as a unifying force, but because WCDMA will not have to hand back down to circuit switched voice to place the call and can stay packet switched for both voice and data. Obviously for CDMA2000 based technologies packet switched voice has even more advantages.
 
- Presently very few operators are executing Carrier Aggregation. Australia will not have CA active until next year.

Admittedly lots of trials listed here but also a decent number of commercial deployments:

http://en.wikipedia.org/wiki/LTE_Advanced

e.g.

http://www.sktelecom.com/en/press/detail.do?idx=1075 said:
SK Telecom upgraded the existing LTE-Advanced which provides up to 150Mbps by applying Carrier Aggregation (CA) technology that binds 20MHz bandwidth in 1.8GHz band and 10MHz bandwidth in 800MHz band...

The company plans to set itself apart from two other local mobile carriers by providing 225Mbps LTE-Advanced service all across the nation from July 1 and continue to maintain its comparative advantage in service coverage.

- MDM9625 is the obvious choice for inclusion in the iPhone 6. It was quite clear this was going to happen immediately after the 5S was announced a year ago. MDM9635 is not ready for this scale of production yet unless major changes are made to its design (lithography for one).

Samsung have released two Cat 6 phones: the Galaxy Alpha using the Intel XMM7260 modem and the Galaxy S5 Broadband LTE-A featuring the Qualcomm MDM9635 modem.

The Snapdragon 810 SoC, due early next year, will feature an updated MDM9635 on-die that supports Cat 7 and aggregation of up to three 20 MHz LTE carriers.

Presumably Qualcomm's 64-bit Krait-based SoC that will likely feature in the autumn 2015 Galaxy Note 5 (and compete against the iPhone 6S) will support Convergence Carrier Aggregation across FDD and TDD LTE modes. There are lots of operators who have broad spectrum holdings:

http://pr.huawei.com/en/news/hw-327910-ictca.htm said:
Vodafone currently has 800MHz, 1800MHz, and 2600MHz, a total of 50MHz spectrum bandwidth in FDD mode and 20MHz of 2600MHz in TDD mode in Spain.
 
So 300 MB/s is easily accomplished then... 10 bits per byte on the wire at 6 Gbps is 600 MB/s. Blows those ordinary HDDs away...

...What part of "SATA is just a storage medium, it has nothing to do with the speed of the actual drive" didn't you get?

You can have a Gen 6 Fibre Channel capable of 25.6 GB/s, but if the device you connect it to is only capable of 256 KB/s, well, the theoretical speed doesn't mean very much, does it?

I originally said "many SSDs still can't achieve this speed, and no HDDs can."
 
Admittedly lots of trials listed here but also a decent number of commercial deployments:
Lots of operators are running limited trials with temporary or early spectrum leases. Several have deployed CA in halo markets (such as downtown), although this only sort of counts. Very few operators actually have a substantial deployment of CA at this stage. Not that I begrudge anybody for not having deployed CA; it is just an interesting point of perspective to have when thinking about the decisions these manufacturers make when they build handsets.

Samsung have released two Cat 6 phones: the Galaxy Alpha using the Intel XMM7260 modem and the Galaxy S5 Broadband LTE-A featuring the Qualcomm MDM9635 modem.
Both of which are fairly low-quantity phones: one has just been released and is not a flagship, and the other is only available in limited markets. The prime movers of the handset business are still the iPhone and the original Samsung Galaxy S5. The SGS5 is all 28nm silicon, and the iPhone 6 will probably be too.

The Snapdragon 810 SoC, due early next year, will feature an updated MDM9635 on-die that supports Cat 7 and aggregation of up to three 20 MHz LTE carriers.

Presumably Qualcomm's 64-bit Krait-based SoC that will likely feature in the autumn 2015 Galaxy Note 5 (and compete against the iPhone 6S) will support Convergence Carrier Aggregation across FDD and TDD LTE modes. There are lots of operators who have broad spectrum holdings:
So this holds broadly true to the cadence I advised in an earlier post: the 20nm device wave is coming next year. So Apple will either entirely miss out on 20nm this year, or they will use TSMC and bet big on the SoC. Apple is always conservative with the radio side, and I don't expect them to change this year.


Needless to say, never a dull moment in tech! New lithography, new SoCs, new handsets, new wireless standards. Good stuff.
 
I think technically the iPhone has four antennas and three full cellular chains.

Physically the iPhone only has two antennas, one at the top, one at the bottom, which is the minimum needed for LTE:

[Attached image: iphone_5s_antennas.png]

Logically, it has three Rx paths and one shared Tx path:

  • In Rx mode, one is used for CDMA/GSM/UMTS, while both are used for LTE MIMO. In Verizon LTE mode, one antenna serves double duty to watch for incoming CDMA 1x pages.
  • In Tx mode, one antenna is used, picked for best quality, as you said.

In order to support CDMA2000 voice and LTE simultaneously, it would need yet another antenna, transceiver port, filter ICs, and power amp.

I believe that Apple uses (or did use) discrete parts for filters and amps, so perhaps only a transceiver and antenna.

As you said, though, it's clear that Apple has chosen form over function, and would rather not make the space for it. Hmm. Perhaps with the larger iPhone 6 models?
 
Why would they place the primary antenna at the bottom of the phone, where your hand is most likely to be? If you're in a fringe area, simply holding it could be the difference between having a signal and having nothing... I'm no engineer, but it never seemed like a good idea to place antennas there.
 
Most phones with multiple antennas these days put their primary antenna in the chin, because it's furthest away from your brain.

However, they can usually receive on either antenna, as the need arises. On top of that, the iPhone in particular can use either antenna to transmit on.

Interestingly, the Verizon iPhone 4 didn't have the same antenna problem as the other iPhone 4 models did, because Verizon had previously mandated two antennas on all its main phones. The Verizon design was carried over to the next iPhone model for everyone. When people talk about carriers meddling with phone designs, sometimes it's for the best.
 
Would the distance between the two ends really affect your brain enough?

I've read that a distance of even two inches can cut the amount of electromagnetic radiation by 75%.

Governments of the world set a specific limit on the maximum amount of RF energy that we absorb via heating (the SAR rating - Specific Absorption Rate). For instance, in the US it is 1.6 watts per kilogram.

Our adult skulls apparently do a fair job of blocking some or most of the radiated energy. However, we all have openings in our skull near where we place cell phones: our ear canals. The phones with the lowest SARs are usually flip phones, sliders, and really long candy-bar shapes... all with their antenna as far down as possible, away from the ear opening.

Note that it's not the radiated energy itself that's dangerous (it's not like nuclear radiation), but its heating effect. Many people suggest that young kids should not spend a lot of time talking on cell phones, because their skulls are thinner, and have more fluid that can heat up. And of course we all know to keep cell phones away from men's genitals, because the heating can destroy sperm production.
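For intuition on that two-inch figure, here is a minimal sketch of the inverse-square falloff, assuming idealized free space (heads and hands are not free space, so treat it as a rough guide):

```python
# In free space, radiated power density falls off as 1/d^2, so doubling
# the distance from the antenna cuts exposure to a quarter: a 75% drop.
def relative_power_density(d_ref: float, d: float) -> float:
    return (d_ref / d) ** 2

print(relative_power_density(1.0, 2.0))  # -> 0.25, i.e. 75% less exposure
```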

 
This is not true. It seems to confuse several different radio concepts.
Sorry, while somewhat simplified, it remains correct. When transmitting, you attempt to utilise your granted bandwidth to its maximum. There are multiple different methods of sharing this bandwidth across multiple users (because in reality this is required), but they all have the same inherent effect: your station uses a smaller fraction of the bandwidth. It's simply easier to demonstrate using a more naive and simplified approach, rather than also having to detail multiplexing approaches, which we only use to reduce collision-induced waste. Note that the example given completely disregarded this. And if you know your theory properly, you should know just how much is lost without these aids.

The theoretical bandwidth is measured with the assumption of 100% transmit time over the entire channel bandwidth. Again, as more devices enter the same base channel, competition increases over available space and time, and devices are required to wait more. That directly cuts down on how many bits they can put through the channel. Again, I remind you of the effect 'space' has; if you don't know it, you should at the very minimum familiarise yourself with Shannon's theorem.
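For reference, here's a quick illustration of Shannon's theorem in code; the bandwidth and SNR values are arbitrary examples:

```python
# Shannon capacity: C = B * log2(1 + SNR). Bandwidth and SNR jointly cap
# the achievable rate, regardless of the modulation layered on top.
from math import log2

def shannon_capacity_mbps(bandwidth_mhz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * log2(1 + snr_linear)  # MHz * bits/s/Hz = Mbps

print(shannon_capacity_mbps(10, 20))  # 10 MHz at 20 dB SNR -> ~66.6 Mbps
```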

Modulation does come into play when you need to calculate how much time it takes an individual station to push its dataset onto the channel, as previously we also assumed that this was perfect (obviously, in reality it's not).

But this also gets us into the subject of power efficiency vs. channel efficiency. 256QAM is bandwidth efficient but power inefficient: in order to maintain a reasonable d_min (the minimum distance between constellation points), you need to put through a lot of power. But a lot of equipment can't push the amount of power needed to go anything but very short distances. After that, it has to switch to less efficient coding schemes and modulations in order to maintain a sufficiently low bit error rate. Try using an 802.11ac-equipped laptop, for example; if your equipment is any good, you can poll what modulation and coding scheme it's using on at least the upstream or the downstream. If it's better, you can get both.

You should notice 256QAM being dropped very quickly in favour of 64QAM, and before that, the coding rate decreasing quickly with distance.
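A hedged sketch of that fallback behaviour; the SNR thresholds below are rough placeholders rather than spec values, and the free-space path-loss model ignores walls and interference:

```python
# Pick the densest constellation whose (illustrative) SNR requirement is
# still met as free-space path loss eats into the link budget.
from math import log10

SNR_REQUIRED_DB = {"256QAM": 28, "64QAM": 22, "16QAM": 15, "QPSK": 8}

def snr_at_distance_db(snr_at_1m_db: float, distance_m: float) -> float:
    # Free-space loss grows by 20*log10(d); real environments are worse.
    return snr_at_1m_db - 20 * log10(distance_m)

def best_modulation(snr_at_1m_db: float, distance_m: float) -> str:
    snr = snr_at_distance_db(snr_at_1m_db, distance_m)
    for modulation, required in SNR_REQUIRED_DB.items():  # densest first
        if snr >= required:
            return modulation
    return "out of range"

for d in (2, 15, 40, 100):
    print(d, best_modulation(50, d))  # 256QAM -> 64QAM -> 16QAM -> QPSK
```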

The net effect, of course, is that my original example assumed the most optimal and favourable conditions and still came out with harsh results. The reality, which imposes further limits and issues, gets even worse results.
 
YOU may not need that sort of bandwidth, but certainly a lot of people do. It's like saying "why do people need cars when I can get to work fine on the train?"

----------



Gary, you missed my point, because no, that's not what I'm saying; actually a better criticism would've been more along the lines of why use a diesel car instead of a petrol one. Please do tell me what it is you need more than about 24 Mbps for, even via tethering? Because as someone who operates various web services and connects remotely (via WiFi and cell) to my home and racked machines, I've never needed that kind of bandwidth and have never known others to either.
 
4K Netflix?
http://recombu.com/digital/news/netflix-confirms-house-of-cards-in-4k-for-uk_M13135.html That's around 17 Mbps, and when you have a household with multiple users who want to stream 1080p and 4K at the same time, then bandwidth IS needed.

And no, the petrol/diesel analogy doesn't fit. Stop being so blinkered and consider that other users around you may have DIFFERENT use cases :).

I also operate web services and remotely connected users (via VPN), and can certainly say we've had one user use up 20 Mbps in one go.
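To make the household point concrete, a small sketch using the 17 Mbps figure above (the HD-stream and video-call bitrates are my own placeholders):

```python
# Concurrent household streams add up fast against a ~24 Mbps link.
streams_mbps = {"4K Netflix": 17, "1080p stream": 5,
                "second 1080p": 5, "video call": 3}
total = sum(streams_mbps.values())
print(total, "Mbps needed concurrently")  # -> 30 Mbps, already over 24 Mbps
```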
 